Thursday, August 20, 2015

Android 6.0 Marshmallow: Top six features you need to know

After the guessing game that went on for months, Google has finally announced its next Android iteration will be named after the sweet treat Marshmallow. So, now M is for Marshmallow.
Marshmallow was one of the most heavily speculated names, and it fits Google’s nomenclature of sweet treats: Cupcake, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat and Lollipop. It beat other probable names like mud pie, mousse and our very favourite, Malai Barfi.
The company revealed the name on its developers blog, alongside the final Android 6.0 SDK, which is available for download via the SDK Manager in Android Studio. The SDK brings access to the final Android APIs and the latest build tools.
“Today with the final Developer Preview update, we’re introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow,” Jamal Eason, Product Manager, Android writes in a blogpost.
Marshmallow brings new platform features such as fingerprint scanner support and the Doze power-saving mode, and it also introduces a new permissions model.
Google Play has also been made ready to accept API 23 apps via the Google Play Developer Console. At the consumer launch later this year, the Google Play store will be updated so that the app install and update process supports the new permissions model for apps using API 23.
“Classes for detecting and parsing bar codes are available in the com.google.android.gms.vision.barcode namespace. The BarcodeDetector class is the main workhorse — processing Frame objects to return a SparseArray<Barcode> types,” he further adds.
Google has also revealed its new Android lawn statue for Marshmallow.
Needless to say, Android Marshmallow brings new app permissions, custom Chrome Tabs, fingerprint support and improved power management.
Take a look at some of its cool new features announced earlier this year:
App Permissions 
App permissions have received a major overhaul: Google will let users decide which permissions to allow or revoke, based on when the relevant functions are actually used. Unlike the current implementation, where users have to agree to all of an app’s permissions at first install and again for updates, in Android M users are prompted for a permission only when they use the feature in an app that needs it.
Google has identified eight permission groups, including location, camera and contacts, for which you grant access individually. For instance, if you want to send a voice message in WhatsApp, a permissions prompt will pop up asking for access to the microphone. You can revoke the permission later if you wish. App updates will also no longer ask for permissions up front; you are asked only when you use a feature that requires that particular permission.
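For developers, here is a minimal sketch of the new runtime-permission flow in API 23, using the microphone permission from the WhatsApp example above; the method name and request code are illustrative, not part of the platform.

// Inside an Activity targeting API 23 (illustrative sketch).
private static final int REQUEST_RECORD_AUDIO = 1; // arbitrary request code

private void startVoiceMessage() {
    if (checkSelfPermission(Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        // Ask only when the user actually taps the voice-message button.
        requestPermissions(new String[] { Manifest.permission.RECORD_AUDIO },
                REQUEST_RECORD_AUDIO);
        return;
    }
    recordVoiceMessage(); // hypothetical helper that starts recording
}

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions,
        int[] grantResults) {
    if (requestCode == REQUEST_RECORD_AUDIO && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        recordVoiceMessage();
    }
}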
Web Experience: Custom Chrome Tabs
The web browsing experience with the Chrome browser also gets a shot in the arm. Chrome Custom Tabs, a new feature, lets developers open web content inside their app without forcing a switch to the separate Chrome browser; Chrome runs atop your app whenever you tap a link within it. Features such as automatic sign-in, saved passwords and autofill work seamlessly, and the Custom Tab adopts the colours and fonts of the app it is opened from. In principle it is similar to Facebook’s Instant Articles, the difference being that Chrome Custom Tabs make you feel like you are still inside the app you are browsing from.
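For developers, a rough sketch of launching a link in a Chrome Custom Tab; this assumes the separately shipped Chrome Custom Tabs support library (com.android.support:customtabs), and the colour and URL are placeholders.

// Open a link in a Chrome Custom Tab that adopts the app's toolbar colour.
CustomTabsIntent customTab = new CustomTabsIntent.Builder()
        .setToolbarColor(Color.parseColor("#3F51B5")) // placeholder brand colour
        .build();
customTab.launchUrl(this, Uri.parse("https://example.com/article")); // 'this' is the current Activity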
App Linking
Android already supports app linking through its Intents system, which gives you the choice of opening a particular web link in a web browser or in an app. Until now, if you had a Twitter link in, say, your inbox and you clicked on it, you got a prompt asking whether to open the link in your browser or in the Twitter app installed on your phone.
Android M will let developers add an auto-verify declaration to their app, which opens the link directly in the respective app (provided the app is installed on your phone). In the background, Android M verifies the link against the app’s website and, once verified, opens the link in the app itself without asking you where you want to open it.
Android Pay
This feature will let you make your payments using near-field communication (NFC) and host card emulation techniques for tap-to-pay services. You just need to unlock your phone, keep it near an NFC terminal and your payment is done, without opening any app. Google says when you add in your card details, a virtual account number is created to make your payments. Your actual card number is not shared with the store during the transaction.
According to Google, Android Pay will come pre-installed on AT&T, Verizon and T-Mobile devices and will be accepted in around 700,000 US stores that take contactless payments. Android Pay will replace the Google Wallet app, and it can also be used for in-app payments provided developers integrate Pay into their apps.
Fingerprint Support
Android M standardises fingerprint sensor support, and Google is working with phone makers on a common API for their sensors. You can use your fingerprint to authorise an Android Pay transaction, unlock your device or make Play Store purchases.
Power management
Android M features a smart power-management mode called Doze, which lets the system manage background processes more aggressively. The OS keeps tabs on the motion-detection sensors and, if there is no activity for a long time, shuts down some processes. Even while dozing, the device can still be woken by alarms and high-priority notifications. According to Google, Doze almost doubled the standby time of the Nexus 9 compared with Android 5.0 Lollipop.
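For developers, a small hedged sketch of what this means in practice: API 23 adds "allow while idle" alarm variants that can still fire during Doze, and a way to check whether the user has exempted an app from battery optimisations. The 15-minute delay below is arbitrary, and pendingIntent is assumed to already exist.

// Schedule an alarm that may fire even while the device is dozing.
AlarmManager alarms = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
long triggerAt = System.currentTimeMillis() + 15 * 60 * 1000; // arbitrary: 15 minutes from now
alarms.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, triggerAt, pendingIntent);

// Check whether this app has been whitelisted from battery optimisations.
PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
boolean exempt = pm.isIgnoringBatteryOptimizations(context.getPackageName());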
Android M will also support USB Type-C for charging. And because USB Type-C can carry power in both directions, the same port can be used either to charge the phone or to charge another device from it.
Apart from these main features, other improvements include a better copy/paste implementation: in Android M you get a floating toolbar just above your selection with Cut, Copy and Paste options. The Direct Share feature lets you share images or links with your most frequently used contacts or apps in a single tap. Volume controls also gain a drop-down menu, a feature already common on Cyanogen OS.

Android API Differences Between 22 and 23

This report details the changes in the core Android framework API between two API Level specifications. It shows additions, modifications, and removals for packages, classes, methods, and fields. The report also includes general statistics that characterize the extent and type of the differences.
This report is based on a comparison of the Android API specifications whose API Level identifiers are given in the upper-right corner of this page. It compares a newer "to" API to an older "from" API, noting all changes relative to the older API. So, for example, API elements marked as removed are no longer present in the "to" API specification.
To navigate the report, use the "Select a Diffs Index" and "Filter the Index" controls on the left. The report uses text formatting to indicate interface names, links to reference documentation, and links to change descriptions. The statistics are accessible from the "Statistics" link in the upper-right corner.
For more information about the Android framework API and SDK, see the Android Developers site.

Removed Packages
org.apache.commons.logging
org.apache.http
org.apache.http.auth
org.apache.http.auth.params
org.apache.http.client
org.apache.http.client.entity
org.apache.http.client.methods
org.apache.http.client.params
org.apache.http.client.protocol
org.apache.http.client.utils
org.apache.http.conn.params
org.apache.http.conn.routing
org.apache.http.conn.util
org.apache.http.cookie
org.apache.http.cookie.params
org.apache.http.entity
org.apache.http.impl
org.apache.http.impl.auth
org.apache.http.impl.client
org.apache.http.impl.conn
org.apache.http.impl.conn.tsccm
org.apache.http.impl.cookie
org.apache.http.impl.entity
org.apache.http.impl.io
org.apache.http.io
org.apache.http.message
org.apache.http.protocol
org.apache.http.util
 
Added Packages
android.app.assist
android.hardware.fingerprint
android.media.midi
android.security.keystore
android.service.chooser
 
Changed Packages
android
android.accounts
android.app
android.app.admin
android.app.usage
android.bluetooth
android.bluetooth.le
android.content
android.content.pm
android.content.res
android.database
android.graphics
android.graphics.drawable
android.hardware
android.hardware.camera2
android.hardware.camera2.params
android.hardware.usb
android.media
android.media.browse
android.media.session
android.media.tv
android.net
android.net.http
android.net.wifi
android.nfc
android.os
android.print
android.printservice
android.provider
android.renderscript
android.security
android.service.carrier
android.service.dreams
android.service.media
android.service.notification
android.service.voice
android.speech
android.speech.tts
android.system
android.telecom
android.telephony
android.test.mock
android.text
android.transition
android.util
android.view
android.view.accessibility
android.webkit
android.widget
org.apache.http.conn
org.apache.http.conn.scheme
org.apache.http.params

New features in Android 6.0 API (API level 23)

The M Developer Preview gives you an advance look at the upcoming release for the Android platform, which offers new features for users and app developers. This document provides an introduction to the most notable APIs.
The M Developer Preview 3 release includes the final APIs for Android 6.0 (API level 23). If you are preparing an app for use on Android 6.0, download the latest SDK to complete your final updates and release testing. You can review the final APIs in the API Reference and see the API differences in the Android API Differences Report.
Important: You may now publish apps that target Android 6.0 (API level 23) to the Google Play store.
Note: If you have been working with previous preview releases and want to see the differences between the final API and previous preview versions, download the additional difference reports included in the preview docs reference.

Important behavior changes

If you have previously published an app for Android, be aware that your app might be affected by changes in the platform.
Please see Behavior Changes for complete information.

App Linking


This preview enhances Android’s intent system by providing more powerful app linking. This feature allows you to associate an app with a web domain you own. Based on this association, the platform can determine the default app to use to handle a particular web link and skip prompting users to select an app. To learn how to implement this feature, see App Linking.
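A minimal sketch of the manifest side of this feature: marking the intent filter that handles your domain's links with android:autoVerify="true" (the activity name and the example.com domain are placeholders).

<activity android:name=".ArticleActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="http" android:host="example.com" />
        <data android:scheme="https" android:host="example.com" />
    </intent-filter>
</activity>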

Auto Backup for Apps


The system now performs automatic full data backup and restore for apps. For the duration of the M Developer Preview program, all apps are backed up, independent of which SDK version they target. After the final M SDK release, your app must target M to enable this behavior; you do not need to add any additional code. If users delete their Google accounts, their backup data is deleted as well. To learn how this feature works and how to configure what to back up on the file system, see Auto Backup for Apps.
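A minimal sketch of how an app can describe what to back up once it targets the final SDK; the resource name backup_rules and the paths are placeholders.

In AndroidManifest.xml, point the application element at a backup rules resource:

<application android:fullBackupContent="@xml/backup_rules">

Then describe what to include or exclude in res/xml/backup_rules.xml:

<?xml version="1.0" encoding="utf-8"?>
<full-backup-content>
    <include domain="sharedpref" path="." />
    <exclude domain="database" path="device_specific_cache.db" />
</full-backup-content>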

Authentication


This preview offers new APIs to let you authenticate users by using their fingerprint scans on supported devices, and check how recently the user was last authenticated using a device unlocking mechanism (such as a lockscreen password). Use these APIs in conjunction with the Android Keystore system.

Fingerprint Authentication

To authenticate users via fingerprint scan, get an instance of the new FingerprintManager class and call the authenticate() method. Your app must be running on a compatible device with a fingerprint sensor. You must implement the user interface for the fingerprint authentication flow in your app, and use the standard Android fingerprint icon in your UI. The Android fingerprint icon (c_fp_40px.png) is included in the sample app. If you are developing multiple apps that use fingerprint authentication, note that each app must authenticate the user’s fingerprint independently.
To use this feature in your app, first add the USE_FINGERPRINT permission in your manifest.
<uses-permission
        android:name="android.permission.USE_FINGERPRINT" />
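Then, as a hedged sketch of the authentication call itself (the CryptoObject setup is omitted and the callbacks only log the outcome):

// Inside an Activity on an API 23 device with a fingerprint sensor.
FingerprintManager fingerprints =
        (FingerprintManager) getSystemService(Context.FINGERPRINT_SERVICE);
if (fingerprints.isHardwareDetected() && fingerprints.hasEnrolledFingerprints()) {
    CancellationSignal cancel = new CancellationSignal();
    fingerprints.authenticate(null /* CryptoObject */, cancel, 0 /* flags */,
            new FingerprintManager.AuthenticationCallback() {
                @Override
                public void onAuthenticationSucceeded(
                        FingerprintManager.AuthenticationResult result) {
                    Log.d("Fingerprint", "Authenticated");
                }

                @Override
                public void onAuthenticationFailed() {
                    Log.d("Fingerprint", "Fingerprint not recognised, try again");
                }
            }, null /* handler */);
}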
To see an app implementation of fingerprint authentication, refer to the Fingerprint Dialog sample. For a demonstration of how you can use these authentication APIs in conjunction with other Android APIs, see the video Fingerprint and Payment APIs.
If you are testing this feature, follow these steps:
  1. Install Android SDK Tools Revision 24.3, if you have not done so.
  2. Enroll a new fingerprint in the emulator by going to Settings > Security > Fingerprint, then follow the enrollment instructions.
  3. Use an emulator to emulate fingerprint touch events with the following command. Use the same command to emulate fingerprint touch events on the lockscreen or in your app.
    adb -e emu finger touch <finger_id>
    On Windows, you may have to run telnet 127.0.0.1 <emulator-id> followed by finger touch <finger_id>.

Confirm Credential

Your app can authenticate users based on how recently they last unlocked their device. This feature frees users from having to remember additional app-specific passwords, and avoids the need for you to implement your own authentication user interface. Your app should use this feature in conjunction with a public or secret key implementation for user authentication.
To set the timeout duration for which the same key can be re-used after a user is successfully authenticated, call the new setUserAuthenticationValidityDurationSeconds() method when you set up a KeyGenerator or KeyPairGenerator.
Avoid showing the re-authentication dialog excessively; your app should try using the cryptographic object first and, if the timeout has expired, use the createConfirmDeviceCredentialIntent() method to re-authenticate the user within your app.
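A minimal sketch, assuming an AES key in the Android Keystore and a 30-second validity window; the key alias, duration and request code are arbitrary, and exception handling is omitted.

// Generate an AES key that is usable only within 30 seconds of the last device unlock.
KeyGenerator generator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
generator.init(new KeyGenParameterSpec.Builder("confirm_credential_key", // arbitrary alias
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setUserAuthenticationRequired(true)
        .setUserAuthenticationValidityDurationSeconds(30) // re-use window after unlock
        .build());
generator.generateKey();

// If the key has timed out, ask the user to confirm their lockscreen credential again.
KeyguardManager keyguard = (KeyguardManager) getSystemService(Context.KEYGUARD_SERVICE);
startActivityForResult(
        keyguard.createConfirmDeviceCredentialIntent("Confirm your screen lock", null),
        1 /* hypothetical request code */);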
To see an app implementation of this feature, refer to the Confirm Credential sample.

Direct Share


This preview provides you with APIs to make sharing intuitive and quick for users. You can now define direct share targets that launch a specific activity in your app. These direct share targets are exposed to users via the Share menu. This feature allows users to share content to targets, such as contacts, within other apps. For example, the direct share target might launch an activity in another social network app, which lets the user share content directly to a specific friend or community in that app.
To enable direct share targets you must define a class that extends the ChooserTargetService class. Declare your service in the manifest. Within that declaration, specify the BIND_CHOOSER_TARGET_SERVICE permission and an intent filter using the SERVICE_INTERFACE action.
The following example shows how you might declare the ChooserTargetService in your manifest.
<service android:name=".ChooserTargetService"
        android:label="@string/service_name"
        android:permission="android.permission.BIND_CHOOSER_TARGET_SERVICE">
    <intent-filter>
        <action android:name="android.service.chooser.ChooserTargetService" />
    </intent-filter>
</service>
For each activity that you want to expose to ChooserTargetService, add a <meta-data> element with the name "android.service.chooser.chooser_target_service" in your app manifest.
<activity android:name=".MyShareActivity"
        android:label="@string/share_activity_label">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
    </intent-filter>
<meta-data
        android:name="android.service.chooser.chooser_target_service"
        android:value=".ChooserTargetService" />
</activity>
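A hedged sketch of the service class itself; the target label, icon resource, score and extras are placeholders, and the activity name matches the .MyShareActivity declared above.

public class ChooserTargetService extends android.service.chooser.ChooserTargetService {
    @Override
    public List<ChooserTarget> onGetChooserTargets(ComponentName targetActivityName,
            IntentFilter matchedFilter) {
        List<ChooserTarget> targets = new ArrayList<>();
        Bundle extras = new Bundle();
        extras.putString("contact_id", "alice"); // illustrative extra for the target activity
        targets.add(new ChooserTarget(
                "Alice",                                              // label shown in the Share menu
                Icon.createWithResource(this, R.drawable.ic_contact), // placeholder icon resource
                1.0f,                                                 // relative ranking score
                new ComponentName(getPackageName(), MyShareActivity.class.getName()),
                extras));
        return targets;
    }
}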

Voice Interactions


This preview provides a new voice interaction API which, together with Voice Actions, allows you to build conversational voice experiences into your apps. Call the isVoiceInteraction() method to determine if a voice action triggered your activity. If so, your app can use the VoiceInteractor class to request a voice confirmation from the user, select from a list of options, and more.
Most voice interactions originate from a user voice action. A voice interaction activity can also, however, start without user input. For example, another app launched through a voice interaction can also send an intent to launch a voice interaction. To determine if your activity launched from a user voice query or from another voice interaction app, call the isVoiceInteractionRoot() method. If another app launched your activity, the method returns false. Your app may then prompt the user to confirm that they intended this action.
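A hedged sketch of requesting a voice confirmation from an activity that was started by a voice action; the prompt text and helper method are placeholders.

// Inside an Activity started through a voice action (API 23).
if (isVoiceInteraction()) {
    VoiceInteractor.Prompt prompt =
            new VoiceInteractor.Prompt("Send the message now?"); // placeholder prompt
    getVoiceInteractor().submitRequest(
            new VoiceInteractor.ConfirmationRequest(prompt, null /* extras */) {
                @Override
                public void onConfirmationResult(boolean confirmed, Bundle result) {
                    if (confirmed) {
                        sendMessage(); // hypothetical helper
                    }
                    finish();
                }
            });
}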
To learn more about implementing voice actions, see the Voice Actions developer site.

Assist API


This preview offers a new way for users to engage with your apps through an assistant. To use this feature, the user must enable the assistant to use the current context. Once enabled, the user can summon the assistant within any app, by long-pressing on the Home button.
Your app can elect to not share the current context with the assistant by setting the FLAG_SECURE flag. In addition to the standard set of information that the platform passes to the assistant, your app can share additional information by using the new AssistContent class.
To provide the assistant with additional context from your app, follow these steps:
  1. Implement the Application.OnProvideAssistDataListener interface.
  2. Register this listener by using registerOnProvideAssistDataListener().
  3. To provide activity-specific contextual information, override the onProvideAssistData() callback and, optionally, the new onProvideAssistContent() callback, as in the sketch below.
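A minimal sketch of step 3, sharing a deep link and structured data with the assistant; the URI and the JSON-LD snippet are placeholders.

// In an Activity (API 23): provide extra context when the user summons the assistant.
@Override
public void onProvideAssistContent(AssistContent outContent) {
    super.onProvideAssistContent(outContent);
    outContent.setWebUri(Uri.parse("https://example.com/recipes/123")); // placeholder deep link
    outContent.setStructuredData(
            "{ \"@type\": \"Recipe\", \"name\": \"Marshmallow squares\" }"); // illustrative JSON-LD
}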

Notifications


This preview adds a number of API changes for notifications; see the API Differences Report for the complete list.

Bluetooth Stylus Support


This preview provides improved support for user input using a Bluetooth stylus. Users can pair and connect a compatible Bluetooth stylus with their phone or tablet. While connected, position information from the touch screen is fused with pressure and button information from the stylus to provide a greater range of expression than with the touch screen alone. Your app can listen for stylus button presses and perform secondary actions, by registering View.OnContextClickListener and GestureDetector.OnContextClickListener objects in your activity.
Use the MotionEvent methods and constants to detect stylus button interactions.

Improved Bluetooth Low Energy Scanning


If your app performs Bluetooth Low Energy scans, use the new setCallbackType() method to specify that you want the system to notify callbacks when it first finds, or sees after a long time, an advertisement packet matching the set ScanFilter. This approach to scanning is more power-efficient than what’s provided in the previous platform version.
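A minimal sketch of such a scan; the heart-rate service UUID below is used only as an example filter.

// Ask the system to call back only on first match and on loss of the advertiser.
BluetoothLeScanner scanner =
        BluetoothAdapter.getDefaultAdapter().getBluetoothLeScanner();
ScanSettings settings = new ScanSettings.Builder()
        .setScanMode(ScanSettings.SCAN_MODE_LOW_POWER)
        .setCallbackType(ScanSettings.CALLBACK_TYPE_FIRST_MATCH
                | ScanSettings.CALLBACK_TYPE_MATCH_LOST)
        .build();
List<ScanFilter> filters = Collections.singletonList(
        new ScanFilter.Builder()
                .setServiceUuid(ParcelUuid.fromString("0000180d-0000-1000-8000-00805f9b34fb"))
                .build()); // heart-rate service UUID, purely as an example filter
scanner.startScan(filters, settings, new ScanCallback() {
    @Override
    public void onScanResult(int callbackType, ScanResult result) {
        Log.d("BleScan", "Device " + result.getDevice().getAddress()
                + " callbackType=" + callbackType);
    }
});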

Hotspot 2.0 Release 1 Support


This preview adds support for the Hotspot 2.0 Release 1 spec on Nexus 6 and Nexus 9 devices. To provision Hotspot 2.0 credentials in your app, use the new methods of the WifiEnterpriseConfig class, such as setPlmn() and setRealm(). In the WifiConfiguration object, you can set the FQDN and the providerFriendlyName fields. The new isPasspointNetwork() method indicates if a detected network represents a Hotspot 2.0 access point.
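A hedged sketch of provisioning a Passpoint (Hotspot 2.0) configuration with the new fields; the FQDN, friendly name, realm and PLMN values are placeholders, and this assumes isPasspointNetwork() is exposed on WifiConfiguration.

// Provision a Hotspot 2.0 (Passpoint) network configuration (values are placeholders).
WifiConfiguration config = new WifiConfiguration();
config.FQDN = "hotspot.example.com";
config.providerFriendlyName = "Example Wi-Fi Provider";

WifiEnterpriseConfig enterprise = new WifiEnterpriseConfig();
enterprise.setEapMethod(WifiEnterpriseConfig.Eap.TTLS);
enterprise.setRealm("example.com");
enterprise.setPlmn("310026"); // operator PLMN, placeholder value
config.enterpriseConfig = enterprise;

boolean isPasspoint = config.isPasspointNetwork(); // true once the FQDN is set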

4K Display Mode


The platform now allows apps to request that the display resolution be upgraded to 4K rendering on compatible hardware. To query the current physical resolution, use the new Display.Mode APIs. If the UI is drawn at a lower logical resolution and is upscaled to a larger physical resolution, be aware that the physical resolution the getPhysicalWidth() method returns may differ from the logical resolution reported by getSize().
You can request the system to change the physical resolution in your app as it runs, by setting the preferredDisplayModeId property of your app’s window. This feature is useful if you want to switch to 4K display resolution. While in 4K display mode, the UI continues to be rendered at the original resolution (such as 1080p) and is upscaled to 4K, but SurfaceView objects may show content at the native resolution.
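A minimal sketch of picking a higher physical mode for the current window; the "at least 3840 pixels wide" check is just an arbitrary way to find a 4K mode.

// Request a higher physical display mode for this window on capable hardware.
Display display = getWindowManager().getDefaultDisplay();
for (Display.Mode mode : display.getSupportedModes()) {
    if (mode.getPhysicalWidth() >= 3840) { // arbitrary "looks like 4K" check
        WindowManager.LayoutParams params = getWindow().getAttributes();
        params.preferredDisplayModeId = mode.getModeId();
        getWindow().setAttributes(params);
        break;
    }
}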

Themeable ColorStateLists


Theme attributes are now supported in ColorStateList for devices running the M Preview. The Resources.getColorStateList() and Resources.getColor() methods have been deprecated. If you are calling these APIs, call the new Context.getColorStateList() or Context.getColor() methods instead. These methods are also available in the v4 appcompat library via ContextCompat.
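A minimal before/after sketch, assuming a colour resource named R.color.accent and an Activity context:

// Deprecated in API 23:
int old = getResources().getColor(R.color.accent);

// Preferred on API 23, resolved against the current theme:
int themed = getColor(R.color.accent);                     // Context.getColor()
ColorStateList list = getColorStateList(R.color.accent);   // Context.getColorStateList()

// Backwards-compatible alternative via the support library:
int compat = ContextCompat.getColor(this, R.color.accent);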

Audio Features


This preview adds enhancements to audio processing on Android, including:
  • Support for the MIDI protocol, with the new android.media.midi APIs. Use these APIs to send and receive MIDI events.
  • New AudioRecord.Builder and AudioTrack.Builder classes to create digital audio capture and playback objects respectively, and configure audio source and sink properties to override the system defaults.
  • API hooks for associating audio and input devices. This is particularly useful if your app allows users to start a voice search from a game controller or remote control connected to Android TV. The system invokes the new onSearchRequested() callback when the user starts a search. To determine if the user's input device has a built-in microphone, retrieve the InputDevice object from that callback, then call the new hasMicrophone() method.
  • New getDevices() method which lets you retrieve a list of all audio devices currently connected to the system. You can also register an AudioDeviceCallback object if you want the system to notify your app when an audio device connects or disconnects; a short sketch follows this list.
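A minimal sketch of that last item, enumerating output devices and listening for connections:

AudioManager audio = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

// List everything currently attached that can play audio.
for (AudioDeviceInfo device : audio.getDevices(AudioManager.GET_DEVICES_OUTPUTS)) {
    Log.d("Audio", device.getProductName() + " type=" + device.getType());
}

// Get notified when audio devices come and go.
audio.registerAudioDeviceCallback(new AudioDeviceCallback() {
    @Override
    public void onAudioDevicesAdded(AudioDeviceInfo[] addedDevices) {
        Log.d("Audio", addedDevices.length + " device(s) connected");
    }

    @Override
    public void onAudioDevicesRemoved(AudioDeviceInfo[] removedDevices) {
        Log.d("Audio", removedDevices.length + " device(s) disconnected");
    }
}, null /* handler */);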

Video Features


This preview adds new capabilities to the video processing APIs, including:
  • New MediaSync class which helps applications synchronously render audio and video streams. The audio buffers are submitted in a non-blocking fashion and are returned via a callback. It also supports dynamic playback rate.
  • New EVENT_SESSION_RECLAIMED event, which indicates that a session opened by the app has been reclaimed by the resource manager. If your app uses DRM sessions, you should handle this event and make sure not to use a reclaimed session.
  • New ERROR_RECLAIMED error code, which indicates that the resource manager reclaimed the media resource used by the codec. With this exception, the codec must be released, as it has moved to a terminal state.
  • New getMaxSupportedInstances() interface to get a hint for the maximum number of supported concurrent codec instances.
  • New setPlaybackParams() method to set the media playback rate for fast or slow-motion playback. It also stretches or speeds up the audio playback automatically in conjunction with the video; see the sketch after this list.
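A minimal sketch of the playback-rate item, assuming an already prepared MediaPlayer named player:

// Double-speed playback; the platform time-stretches the audio to stay in sync.
player.setPlaybackParams(new PlaybackParams().setSpeed(2.0f));

// Quarter-speed slow motion.
player.setPlaybackParams(new PlaybackParams().setSpeed(0.25f));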

Camera Features


This preview includes the following new APIs for accessing the camera’s flashlight and for camera reprocessing of images:

Flashlight API

If a camera device has a flash unit, you can call the setTorchMode() method to switch the flash unit’s torch mode on or off without opening the camera device. The app does not have exclusive ownership of the flash unit or the camera device. The torch mode is turned off and becomes unavailable whenever the camera device becomes unavailable, or when other camera resources keeping the torch on become unavailable. Other apps can also call setTorchMode() to turn off the torch mode. When the last app that turned on the torch mode is closed, the torch mode is turned off.
You can register a callback to be notified about torch mode status by calling the registerTorchCallback() method. The first time the callback is registered, it is immediately called with the torch mode status of all currently known camera devices with a flash unit. If the torch mode is turned on or off successfully, the onTorchModeChanged() method is invoked.
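A minimal sketch of toggling the torch on the first flash-capable camera and listening for torch changes; error handling is reduced to a log line.

CameraManager cameras = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
    for (String cameraId : cameras.getCameraIdList()) {
        Boolean hasFlash = cameras.getCameraCharacteristics(cameraId)
                .get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
        if (Boolean.TRUE.equals(hasFlash)) {
            cameras.setTorchMode(cameraId, true); // turn the torch on without opening the camera
            break;
        }
    }
} catch (CameraAccessException e) {
    Log.e("Torch", "Could not toggle torch", e);
}

cameras.registerTorchCallback(new CameraManager.TorchCallback() {
    @Override
    public void onTorchModeChanged(String cameraId, boolean enabled) {
        Log.d("Torch", "Camera " + cameraId + " torch enabled=" + enabled);
    }
}, null /* handler */);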

Reprocessing API

The Camera2 API is extended to support YUV and private opaque format image reprocessing. To determine if these reprocessing capabilities are available, call getCameraCharacteristics() and check for the REPROCESS_MAX_CAPTURE_STALL key. If a device supports reprocessing, you can create a reprocessable camera capture session by calling createReprocessableCaptureSession(), and create requests for input buffer reprocessing.
Use the ImageWriter class to connect the input buffer flow to the camera reprocessing input. To get an empty buffer, follow this programming model:
  1. Call the dequeueInputImage() method.
  2. Fill the data into the input buffer.
  3. Send the buffer to the camera by calling the queueInputImage() method.
If you are using an ImageWriter object together with a PRIVATE image, your app cannot access the image data directly. Instead, pass the PRIVATE image directly to the ImageWriter by calling the queueInputImage() method without any buffer copy.
The ImageReader class now supports PRIVATE format image streams. This support allows your app to maintain a circular image queue of ImageReader output images, select one or more images, and send them to the ImageWriter for camera reprocessing.
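A hedged sketch of the three-step programming model above; the input surface would normally come from the reprocessable capture session, maxImages is arbitrary, and fillYuvData() is a hypothetical helper.

// inputSurface is typically obtained from the reprocessable capture session.
ImageWriter writer = ImageWriter.newInstance(inputSurface, 2 /* maxImages */);

Image input = writer.dequeueInputImage(); // 1. get an empty buffer
fillYuvData(input);                       // 2. hypothetical helper that fills the image planes
writer.queueInputImage(input);            // 3. send the buffer back to the camera for reprocessing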

Android for Work Features


This preview includes the following new APIs for Android for Work:
  • Enhanced controls for Corporate-Owned, Single-Use devices: The Device Owner can now control additional settings to improve the management of Corporate-Owned, Single-Use (COSU) devices.
  • Silent install and uninstall of apps by Device Owner: A Device Owner can now silently install and uninstall applications using the PackageInstaller APIs, independent of Google Play for Work. You can now provision devices through a Device Owner that fetches and installs apps without user interaction. This feature is useful for enabling one-touch provisioning of kiosks or other such devices without activating a Google account.
  • Silent enterprise certificate access: When an app calls choosePrivateKeyAlias(), prior to the user being prompted to select a certificate, the Profile or Device Owner can now call the onChoosePrivateKeyAlias() method to provide the alias silently to the requesting application. This feature lets you grant managed apps access to certificates without user interaction.
  • Auto-acceptance of system updates. By setting a system update policy with setSystemUpdatePolicy(), a Device Owner can now auto-accept a system update, for instance in the case of a kiosk device, or postpone the update and prevent it being taken by the user for up to 30 days. Furthermore, an administrator can set a daily time window in which an update must be taken, for example during the hours when a kiosk device is not in use. When a system update is available, the system checks if the Work Policy Controller app has set a system update policy, and behaves accordingly.
  • Delegated certificate installation: A Profile or Device Owner can now grant a third-party app the ability to call the DevicePolicyManager certificate management APIs.
  • Data usage tracking. A Profile or Device Owner can now query for the data usage statistics visible in Settings > Data usage by using the new NetworkStatsManager methods. Profile Owners are automatically granted permission to query data on the profile they manage, while Device Owners get access to usage data of the managed primary user.
  • Runtime permission management: A Profile or Device Owner can set a permission policy for all runtime requests of all applications using setPermissionPolicy(), to either prompt the user to grant the permission or automatically grant or deny the permission silently. If the latter policy is set, the user cannot modify the selection made by the Profile or Device Owner within the app’s permissions screen in Settings; a short sketch follows this list.
  • VPN in Settings: VPN apps are now visible in Settings > More > VPN. Additionally, the notifications that accompany VPN usage are now specific to how that VPN is configured. For Profile Owner, the notifications are specific to whether the VPN is configured for a managed profile, a personal profile, or both. For a Device Owner, the notifications are specific to whether the VPN is configured for the entire device.
  • Work status notification: A status bar briefcase icon now appears whenever an app from the managed profile has an activity in the foreground. Furthermore, if the device is unlocked directly to the activity of an app in the managed profile, a toast is displayed notifying the user that they are within the work profile.
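A hedged sketch of the runtime permission management item above, as a Device Owner might call it; adminComponent stands for the owner's DeviceAdminReceiver component, and the package name is a placeholder.

DevicePolicyManager dpm =
        (DevicePolicyManager) getSystemService(Context.DEVICE_POLICY_SERVICE);

// Auto-grant all future runtime permission requests on this device or profile.
dpm.setPermissionPolicy(adminComponent, DevicePolicyManager.PERMISSION_POLICY_AUTO_GRANT);

// Or grant a single permission to a single managed app silently.
dpm.setPermissionGrantState(adminComponent, "com.example.managedapp",
        Manifest.permission.ACCESS_FINE_LOCATION,
        DevicePolicyManager.PERMISSION_GRANT_STATE_GRANTED);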
For a detailed view of all API changes in the M Developer Preview, see the API Differences Report.

Saturday, August 15, 2015

5 Tips to Properly Style Xamarin.Forms Apps



Xamarin.Forms’ default templates for applications are great! They give you everything you need to get up and running, including all three of your app projects and your shared code project. Nothing is faster for getting a cross-platform application spun up. One thing they don’t do by default is set up nice themes for your applications; in fact, they simply use the default device themes. But we can do better, so I wanted to take a quick second to share a few tips on making your Xamarin.Forms app beautiful with just a few lines of code.

Tip 0: Pick a color palette

I had to start my tips at 0 because this tip is so important for your application. Your application needs an identity and a base to start with… a palette that you can pick from.
Take a hint from Google and browse through their Color guidelines and then use http://www.materialpalette.com/ to pick your colors. Use these not only in the app specific theme for Android, but throughout your Xamarin.Forms applications using Styles.

Tip 1: Use a Light Theme

The light theme is iOS’s default. Don’t fight it, embrace it. By embracing the light theme there is no need to ever set any custom colors in your application. There is a bit of setup required to get this going on each platform using your new palette.
Android
I have covered this in the past so I won’t repost the code, but GO READ THIS POST!! Understanding Android app themes is very important. By applying the default light Holo and Material themes to your application you can completely transform it.
iOS
There is very little that you need to do to optimize iOS applications since the core theme is already light. The only things to update are the status bar color, which I blogged about earlier, and the title bar color itself. You can accomplish this by opening your AppDelegate.cs and applying a few lines of code in the FinishedLaunching method.
Windows Phone
Let’s not forget about Windows Phone, as it is the most interesting. By default, Windows Phone (Silverlight) apps change their themes based on the user’s preference. You will normally see a dark theme, as this is the phone default, but there is a nifty NuGet package from my hero Jeff Wilcox that lets you manually force a theme for your application. Follow his nifty guide to get up and running.
Side Tip: On a NavigationPage you may want to set the BarTextColor to White or a shade of black

Tip 2: Don’t set background colors

Unless you decided to ignore Tip 1 and need to go with a custom dark theme, there is no need to ever set a background color in Xamarin.Forms for an entire page. You could set it for a portion of the page, or perhaps for strips, but never the full page. Let the main application’s theme work for you. This will ensure your font colors work correctly out of the box.

Tip 3: Don’t Use Custom Fonts

Don’t do it unless you REALLY REALLY have to. There is no need to put a Comic Sans font in your application. Let the platforms work for you. iOS does have plenty of great fonts built right in, and I do recommend you customize your iOS apps using them; however, let Android’s Roboto and Windows Phone’s Segoe WP work for you.

Tip 4: Padding & Spacing

This tip requires a bit more of an eye for design and a bit of energy. Each platform has specific requirements around the default padding and spacing between controls on the page. Android uses an 8dp baseline grid for components and a 4dp baseline grid for type alignment.
Windows Phone on the other hand uses a 10 pixel spacing grid between everything on the screen. Take a look at Jeff Wilcox’s Windows Phone MetroGridHelper to enable an overlay to help with spacing.
You can use Xamarin.Forms Device.OS and OnPlatform to do custom tweaks to your Padding.

Source: http://goo.gl/xyf2nh


Xamarin.Forms in Anger – Cards



The #1 request I get for the Xamarin.Forms in Anger is for a Google or Facebook like card view. Today is the day that I fulfill those requests.
To be completely honest, I’ve been thinking about how to do a card view for a while and let me tell you, it’s not easy. Especially when you try to stick with the “in the box” components of Xamarin.Forms. My first attempt at it was hideous.

Hideous Forms

Hideous would be the most polite way of saying it was crap. I started off using BoxViews to draw the lines and contain the whole thing in a Xamarin.Forms Grid. Ouch, yes hideous.
The Grid was becoming a nightmare with all the extra rows and columns needed for lines. What I wouldn’t do for an HTML table with border="1" in Xamarin.Forms. I thought of putting in a feature request to the team, but didn’t. I don’t want them to laugh at me. 😉

A Grid of Lines

I worked up a Xamarin.Forms Grid with some padding, row and column spacing. In addition to those settings, I also set the background color of the grid to a medium dark grey. Then added ContentViews with white backgrounds into each cell of the Grid. This is what it looked like when I finished.
[Image: GridLines]
The white panels acted like the ketchup on my waffle fries and the borders are the potato. Yes, I know, it’s strange to describe Xamarin.Forms design techniques using food, but stay with me; it gets better. Now that I knew the technique worked and was a heck of a lot less messy, I pushed on.

Content View Panels

Each CardView is made up of 5 ContentViews with a background color set to the designed color. Here are some of the background colors for each panel.
[Image: Panels]
The CardView.cs file would have been gigantic if I kept all the code in the same file so I broke it up into different ContentViews. Here are the names for each ContentView that makes up the CardView.
[Image: CardViews]
The CardDetailsView has a column span of three to horizontally bridge the two IconLableViews and the ConfigIconView. The CardStatusView has a row span of two to complete the vertical lefthand status bar.

The CardView Code

For the demo, I put a couple of CardViews in a vertical StackLayout. If I had a lot of cards, I would prefer to use a ListView. Hopefully a reader will let me know how well this design works in a ListView. It should be ok, especially while using the Grid and not a bunch of nested StackLayouts. I’ve learned my lesson.
With all the refactoring, the CardView looks small and plain, but the devil is in the ContentViews.

Solution

The code for this sample is up on GitHub in the “In Anger” repository. For this post, I broke the code out into a separate directory and solution to make it easier for you to reuse the CardView without getting confused with all the other code and resources from other posts.

Source: goo.gl/aCHqyu