As the role of mobile devices in people's lives expands even further, mobile app developers have become a driving force for software innovation. At Microsoft, we are working to enable even greater developer innovation by providing the best experiences to all developers, on any device, with powerful tools, an open platform and a global cloud.
As part of this commitment I am pleased to announce today that Microsoft has signed an agreement to acquire Xamarin, a leading platform provider for mobile app development.
In conjunction with Visual Studio, Xamarin provides a rich mobile development offering that enables developers to build mobile apps using C# and deliver fully native mobile app experiences to all major devices – including iOS, Android, and Windows. Xamarin’s approach enables developers to take advantage of the productivity and power of .NET to build mobile apps, and to use C# to write to the full set of native APIs and mobile capabilities provided by each device platform. This enables developers to easily share common app code across their iOS, Android and Windows apps while still delivering fully native experiences for each of the platforms. Xamarin’s unique solution has fueled amazing growth for more than four years.
Xamarin has more than 15,000 customers in 120 countries, including more than one hundred Fortune 500 companies - and more than 1.3 million unique developers have taken advantage of their offering. Top enterprises such as Alaska Airlines, Coca-Cola Bottling, Thermo Fisher, Honeywell and JetBlue use Xamarin, as do gaming companies like SuperGiant Games and Gummy Drop. Through Xamarin Test Cloud, all types of mobile developers—C#, Objective-C, Java and hybrid app builders —can also test and improve the quality of apps using thousands of cloud-hosted phones and devices. Xamarin was recently named one of the top startups that help run the Internet.
Microsoft has had a longstanding partnership with Xamarin, and we have jointly built Xamarin integration into Visual Studio, Microsoft Azure, Office 365 and our Enterprise Mobility Suite to provide developers with an end-to-end workflow for native, secure apps across platforms. We have also worked closely together to offer the training, tools, services and workflows developers need to succeed.
With today’s acquisition announcement we will be taking this work much further to make our world class developer tools and services even better with deeper integration and enable seamless mobile app dev experiences. The combination of Xamarin, Visual Studio, Visual Studio Team Services, and Azure delivers a complete mobile app dev solution that provides everything a developer needs to develop, test, deliver and instrument mobile apps for every device. We are really excited to see what you build with it.
We are looking forward to providing more information about our plans in the near future – starting at the Microsoft //Build conference coming up in a few weeks, followed by Xamarin Evolve in late April. Be sure to watch my Build keynote and get a front row seat at Evolve to learn more!
Source: https://weblogs.asp.net/scottgu/welcoming-the-xamarin-team-to-microsoft
News, tips, and tutorials for cross-platform (Android, iOS and Windows Phone) app development using the Xamarin platform
Categories
- Android (4)
- Android API level 23 marshmallow (3)
- Azure Mobile Apps (1)
- Learning Tools (1)
- News (5)
- Themeing (2)
- Tips & Tricks (2)
- Xamarin Tips & Tricks (4)
- Xamarin.Forms (3)
Wednesday, February 24, 2016
Monday, February 22, 2016
Xamarin vs. Hybrid HTML: Making the Right Choice for the Enterprise
We want to thank Kevin Ford at Magenic for helping us present a thorough comparison of cross-platform native vs. hybrid HTML approaches for mobile development for the enterprise.
Kevin’s team built a functionally identical sample app utilizing Xamarin and hybrid HTML (in this case Cordova) to understand the differences in user experience, performance, developer experience, and TCO.
Some of the key findings:
- Hybrid HTML approaches couldn't deliver key functionality without prior knowledge of Objective-C and Java to write custom, platform-proprietary plugins
- Cross-platform native apps started 25% faster and loaded large datasets 62% faster
- Cross-platform native apps used 50% less memory and 76% less CPU time
- During development, hybrid HTML apps compiled faster and produced smaller app sizes
- Hybrid HTML did have higher code reuse, but could not deliver the required functionality within the required six-week timeframe
We have made the presentation available for everyone to view below.
On-Demand Recording
The webinar recording is available below and on YouTube if you weren’t able to catch it live, or if you want to forward on to your colleagues. There are several demos showing the differences between Xamarin and hybrid mobile that you’ll want to see.
Slides
Many of you also requested the presentation slides, which you can find here. They don’t have the demos and screenshots from the video, but they’re a good conversation starter or building blocks for your own presentations.
Q&A
Q: How does Xamarin.Forms promise 90+% code re-use?
A: Xamarin.Forms provides a UI framework for describing the layout of a screen, which can be defined in either C# or XAML (an XML syntax). At runtime, each page and its controls are mapped to platform-specific native user interface elements, so the app renders native platform UI and delivers native performance. All business logic and backend code is also completely reused between platforms.
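As an illustration (a minimal sketch, not from the original post; the class and handler names are hypothetical), a page described once in XAML is rendered with native controls on each platform:

```xml
<!-- Hypothetical Xamarin.Forms page: each element maps to a native control at
     runtime (Label becomes UILabel on iOS, TextView on Android, TextBlock on Windows). -->
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="SampleApp.GreetingPage">
  <StackLayout Padding="20">
    <Label Text="Hello, Xamarin.Forms!" HorizontalOptions="Center" />
    <Button Text="Say Hello" Clicked="OnSayHelloClicked" />
  </StackLayout>
</ContentPage>
```

The code-behind (the `OnSayHelloClicked` handler) and any view models live in the shared C# project, which is where the 90+% reuse comes from.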
Q: How does Xamarin compare to other mobile cross-platform native frameworks?
A: There are other frameworks that also take a cross-platform native approach. RoboVM does this with Java; in 2015 they became part of the Xamarin family so for Java developers this might be an alternative.
Other cross-platform native approaches utilize JavaScript to define UI and business logic and translate that to native controls. Although many developers know JavaScript, testing and maintaining mobile apps built on an interpreted language can be more challenging. While solutions like Xamarin Test Cloud can help, you’ll also need to test for errors that could have been caught during compilation.
Secondly, Xamarin has a big ecosystem of developers, components, and partners. There are over 1.6M C# developers worldwide, with thousands of components available from NuGet, the Xamarin Component Store, the ability to bind to CocoaPods, and 100% API access to iOS, Android, and Windows. There’s little you can’t do, and if you need help either our support or community sites like StackOverflow will usually have an answer.
Q: Are there any situations where hybrid HTML should be your first choice?
A: If your developers' primary skill set is in web development, they may initially be more productive utilizing a hybrid HTML approach. However, you really need to think about future-proofing your app to stay on top of changes in iOS, Android, and Windows, and about whether you'll need more advanced access to a device's sensors or platform APIs, such as iOS 3D Touch, payments, or fingerprint recognition.
These features require contributions by a community to provide the necessary plugins, which sometimes require enhancement and testing, or else you’ll need to write your own custom extensions with Objective-C, Swift, or Java.
Xamarin, on the other hand, provides 100% API access to all the platforms, as well as many cross-platform plugins you can utilize. We also have world-class training through Xamarin University to accelerate development and teach best practices.
Source: https://blog.xamarin.com/webinar-recording-xamarin-vs-hybrid-html-making-the-right-choice-for-the-enterprise/?utm_source=newsletter&utm_medium=email&utm_content=hybrid-webinar-link&utm_campaign=march2016&mkt_tok=3RkMMJWWfF9wsRolu6%2FAZKXonjHpfsX56eUrX6G%2Bi4kz2EFye%2BLIHETpodcMT8tmN6%2BTFAwTG5toziV8R7nCKc1q1c0QXBfr
Tuesday, September 8, 2015
Creating Mobile Apps with Xamarin.Forms Book Preview 2
This is a very helpful free book for learning Xamarin in both Visual Studio and Xamarin Studio. It covers all three platforms (iOS, Android and Windows Phone).
Link for free Download:
Links for individual download of chapters:
- Chapter 1. How Does Xamarin.Forms Fit In? : Download PDF (released Feb. 3)
- Chapter 2. Anatomy of an App : Download PDF (released Feb. 3)
- Chapter 3. Deeper into Text : Download PDF (released Feb. 3)
- Chapter 4. Scrolling the Stack : Download PDF (released Feb. 3)
- Chapter 5. Dealing with Sizes : Download PDF (released Feb. 3)
- Chapter 6. Button Clicks : Download PDF (released Feb. 3)
- Chapter 7. XAML vs. Code : Download PDF (released Feb. 3)
- Chapter 8. Code and XAML in Harmony : Download PDF (released Feb. 3)
- Chapter 9. Platform-Specific API Calls : Download PDF (released Feb. 13)
- Chapter 10. XAML Markup Extensions : Download PDF (released Feb. 20)
- Chapter 11. The Bindable Infrastructure : Download PDF (released Feb. 27)
- Chapter 12. Styles : Download PDF (released Mar. 6)
- Chapter 13. Bitmaps : Download PDF (released Mar. 13)
- Chapter 14. Absolute Layout : Download PDF (updated Mar. 22)
- Chapter 15. The Interactive Interface : Download PDF (released Mar. 27)
- Chapter 16. Data Binding : Download PDF (released Apr. 3)
- Chapter 17. Mastering the Grid : Download PDF (released Apr. 10)
- Chapter 18. MVVM : Download PDF (released Apr. 17)
- Chapter 19. Collection Views : Download PDF (updated July 28)
- Chapter 20. Async and File I/O : Download PDF (released June 5)
- Chapter 21. Transforms : Download PDF (released June 19)
- Chapter 22. Animation : Download PDF (released July 17)
- Chapter 23. Triggers and Behaviors : Download PDF (released August 14)
How to enable Hyper-V in Windows for installing emulator
When your computer and BIOS settings are already configured to support Hyper-V, the setup program for the SDK enables and starts Hyper-V. If you are already a local administrator on the computer, setup also adds you to the Hyper-V Administrators group. Otherwise you may have to enable these prerequisites manually.
If the Hyper-V options are not available, your computer probably doesn’t support Hyper-V, possibly because it doesn’t support SLAT.
For more information about the Windows Features dialog box, see Turn Windows Features On or Off.
To enable Hyper-V in Windows

- In Control Panel, click Programs, and then click Turn Windows features on or off.
- In the Windows Features dialog box, click Hyper-V. The list of options expands.
- In the expanded list of options, select at least the Hyper-V Platform check box, and then click OK.
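The steps above can also be run from an elevated command prompt instead of the Control Panel (a sketch; this assumes a Windows edition that ships Hyper-V, and a reboot is required afterwards):

```shell
REM Sketch: enable Hyper-V with DISM from an elevated command prompt.
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V

REM Or, equivalently, from an elevated PowerShell session:
REM Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```

Either form enables the same Hyper-V Platform feature that the Windows Features dialog exposes.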
Thursday, August 20, 2015
Android 6.0 Marshmallow: Top six features you need to know
After the guessing game that went on for months, Google has finally announced its next Android iteration will be named after the sweet treat Marshmallow. So, now M is for Marshmallow.
Marshmallow was one of the most speculated names, fitting Google's nomenclature of sweet treats: Cupcake, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat and Lollipop. It beat other probable names like mud pie, mousse, and our very favourite, Malai Barfi.
The company revealed the name on its developers blog, alongside the final Android 6.0 SDK, which will be available for download via the SDK Manager in Android Studio and brings access to the final Android APIs and the latest build tools.
“Today with the final Developer Preview update, we’re introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow,” Jamal Eason, Product Manager, Android writes in a blogpost.
Marshmallow brings new platform features such as a fingerprint scanner and the Doze power-saving mode, and along with them a new permissions model.
Google Play is also made ready to accept API 23 apps via the Google Play Developer Console. At the consumer launch later this year, the Google Play store will be updated so that the app install and update process supports the new permissions model for apps using API 23.
“Classes for detecting and parsing bar codes are available in the com.google.android.gms.vision.barcode namespace. The BarcodeDetector class is the main workhorse — processing Frame objects to return a SparseArray<Barcode> types,” he further adds.
Google has also revealed its new Marshmallow lawn statue.
Needless to say, Android Marshmallow brings new app permissions, custom Chrome Tabs, fingerprint support and improved power management.
Take a look at some of its cool new features announced earlier this year:
App Permissions
The App Permissions got a major overhaul and Google will allow users to decide which permissions they want to allow or revoke, based on when those particular functions are used. Unlike the current implementation, where users have to agree to all app permissions on first install and also for updates, in Android M, users will get notifications asking for permissions only when they are using a particular function in an app.
Google has identified eight permission categories, including location, camera and contacts, to cover these requests. So, for instance, if you want to send a voice message in WhatsApp, an App Permissions prompt will pop up, asking for permission to use the microphone. You can also revoke the permission later if you wish. App updates will likewise not ask you for permissions up front, unless you are using a feature that needs you to grant that particular app a new permission.
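Sketched in code (a hypothetical Activity using the support-library compatibility helpers; not from the article, and `recordVoiceMessage()` is an invented placeholder), the check-then-request flow looks roughly like this:

```java
// Hypothetical sketch of the Android M runtime-permission flow.
import android.Manifest;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

public class VoiceMessageActivity extends android.app.Activity {
    private static final int REQ_RECORD_AUDIO = 1;

    // Ask for the microphone only when the user actually starts a voice message.
    void startVoiceMessage() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this,
                    new String[] { Manifest.permission.RECORD_AUDIO }, REQ_RECORD_AUDIO);
        } else {
            recordVoiceMessage();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                           int[] grantResults) {
        // Proceed only if the user granted the permission in the system dialog.
        if (requestCode == REQ_RECORD_AUDIO && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            recordVoiceMessage();
        }
    }

    private void recordVoiceMessage() { /* hypothetical recording logic */ }
}
```

If the user denies the request, the app simply skips the feature; it can ask again the next time the feature is used.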
Web Experience: Custom Chrome Tabs
The web browsing experience also gets a shot in the arm. Chrome Custom Tabs, a new feature, will let apps include web views without the need to switch to the Chrome browser on your phone. The Chrome browser will run atop your app (in case you click on any link within the app). Features such as automatic sign-in, saved passwords and autofill will work seamlessly in these tabs. The Chrome Custom Tab will also take on the colours and fonts of the app it is opened within, making for a seamless experience. In principle it is close to Facebook's Instant Articles, with the difference that Chrome Custom Tabs make you feel like you are still within the app you are browsing from.
App Linking
Android currently supports an app-linking system, via Intents, which gives you the choice to open a particular web link in a web browser or an app. Previously, if you had a Twitter link in, say, your inbox and clicked on it, you got a prompt asking whether to open the link in your browser or in the Twitter app installed on your phone.
Android M will let developers add in an auto-verify feature within their code, which will help open the link within the respective app (provided the app is installed on your phone). In the background, the Android M OS will verify the link with the app’s server and post-authentication will proceed to open the link within the app itself, without asking you where you want to open the link.
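In manifest terms, auto-verification is opted into per intent filter (a sketch with a hypothetical activity and domain; the `android:autoVerify` attribute is the one introduced in the M preview):

```xml
<!-- Hypothetical activity handling verified links for example.com. -->
<activity android:name=".LinkHandlerActivity">
  <intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="https" android:host="example.com" />
  </intent-filter>
</activity>
```

The platform then checks the association against a file hosted on that domain's server, and on success routes matching links straight to the app.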
Android Pay
This feature will let you make your payments using near-field communication (NFC) and host card emulation techniques for tap-to-pay services. You just need to unlock your phone, keep it near an NFC terminal and your payment is done, without opening any app. Google says when you add in your card details, a virtual account number is created to make your payments. Your actual card number is not shared with the store during the transaction.
According to Google, Android Pay will be pre-installed on AT&T, Verizon and T-Mobile devices and will be accepted in around 700,000 stores in the US which accept contact-less payment. Android Pay will replace the Google Wallet app. Android Pay can also be used to make in-app payments provided developers integrate Pay into their apps.
Fingerprint Support
Android M will standardise the fingerprint sensor support and it is working with various phones to make a standard API to go with their sensors. You can use your fingerprint to authorise an Android Pay transaction, unlock your device or make Play Store purchases.
Power management
Android M will feature a smart power-managing feature called Doze. It works by letting the system optimally manage background processes: the OS keeps a tab on the motion-detection sensor, and if there is no activity for a long time, the system shuts down some processes. Even in the Doze state, the system can still be activated by alarms and high-priority notifications. According to Google, this feature has helped increase the standby time on the Nexus 9 by almost two times over Android 5.0 Lollipop.
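For developers, the effect of Doze can be simulated from a connected device over adb using the device-idle commands in the Android 6.0 tooling (a sketch of a typical test session):

```shell
# Simulate Doze on a connected device (Android 6.0 tooling).
adb shell dumpsys battery unplug        # pretend the device is running on battery
adb shell dumpsys deviceidle force-idle # force the idle (Doze) state
# ...observe how the app's background work and alarms behave, then restore:
adb shell dumpsys deviceidle unforce
adb shell dumpsys battery reset
```

This makes it practical to verify that deferred jobs and syncs resume correctly when the device exits Doze.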
Android M will also support USB Type-C for charging. And since USB Type-C is a bi-directional port, you can use it either to charge the phone or to charge another device.
Apart from these main features, some of the other improvements include a better implementation of the Copy/Paste function: in Android M, you will get a floating toolbar just above your selection with Cut, Copy and Paste options. The Direct Share feature will let you share images or links with your most frequently shared contacts or apps in a single click. Volume controls will also get a drop-down menu, a feature that is common on the Cyanogen OS.
Android API Differences Between 22 and 23
This report details the changes in the core Android framework API between two API Level
specifications. It shows additions, modifications, and removals for packages, classes, methods, and fields.
The report also includes general statistics that characterize the extent and type of the differences.
This report is based on a comparison of the Android API specifications whose API Level identifiers are given in the upper-right corner of this page. It compares a newer "to" API to an older "from" API, noting all changes relative to the older API. So, for example, API elements marked as removed are no longer present in the "to" API specification.
To navigate the report, use the "Select a Diffs Index" and "Filter the Index" controls on the left. The report uses text formatting to indicate interface names, links to reference documentation, and links to change descriptions. The statistics are accessible from the "Statistics" link in the upper-right corner.
For more information about the Android framework API and SDK, see the Android Developers site.
Added Packages:
- android.app.assist
- android.hardware.fingerprint
- android.media.midi
- android.security.keystore
- android.service.chooser
New features in Android 6.0 API (API level 23)
The M Developer Preview gives you an advance look at the upcoming release of the Android platform, which offers new features for users and app developers. This document provides an introduction to the most notable APIs.
The M Developer Preview 3 release includes the final APIs for Android 6.0 (API level 23). If you are preparing an app for use on Android 6.0, download the latest SDK to complete your final updates and release testing. You can review the final APIs in the API Reference and see the API differences in the Android API Differences Report.
This preview enhances Android’s intent system by providing more powerful app linking. This feature allows you to associate an app with a web domain you own. Based on this association, the platform can determine the default app to use to handle a particular web link and skip prompting users to select an app. To learn how to implement this feature, see App Linking.
The system now performs automatic full data backup and restore for apps. For the duration of the M Developer Preview program, all apps are backed up, independent of which SDK version they target. After the final M SDK release, your app must target M to enable this behavior; you do not need to add any additional code. If users delete their Google accounts, their backup data is deleted as well. To learn how this feature works and how to configure what to back up on the file system, see Auto Backup for Apps.
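Configuring what gets backed up is done declaratively (a sketch with hypothetical file names, following the preview's full-backup-content XML format):

```xml
<!-- AndroidManifest.xml: point the application at a backup rules resource. -->
<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup_rules">
</application>

<!-- res/xml/backup_rules.xml: include/exclude specific files (names hypothetical). -->
<full-backup-content>
  <include domain="sharedpref" path="user_settings.xml" />
  <exclude domain="database" path="transient_cache.db" />
</full-backup-content>
```

With no rules file at all, the system backs up essentially all of the app's files by default, so explicit excludes matter mainly for caches and sensitive data.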
This preview offers new APIs to let you authenticate users by using their fingerprint scans on supported devices, and check how recently the user was last authenticated using a device unlocking mechanism (such as a lockscreen password). Use these APIs in conjunction with the Android Keystore system.
To use this feature in your app, first add the
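The manifest declaration this step refers to is most likely the fingerprint permission (a sketch, using the permission name from the M preview docs):

```xml
<!-- Request access to the fingerprint hardware in AndroidManifest.xml. -->
<uses-permission android:name="android.permission.USE_FINGERPRINT" />
```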
To see an app implementation of fingerprint authentication, refer to the Fingerprint Dialog sample. For a demonstration of how you can use these authentication APIs in conjunction with other Android APIs, see the video Fingerprint and Payment APIs.
If you are testing this feature, follow these steps:
To set the timeout duration for which the same key can be re-used after a user is successfully authenticated, call the new
Avoid showing the re-authentication dialog excessively -- your app should try using the cryptographic object first and, if the timeout expires, use the
To see an app implementation of this feature, refer to the Confirm Credential sample.
This preview provides you with APIs to make sharing intuitive and quick for users. You can now define direct share targets that launch a specific activity in your app. These direct share targets are exposed to users via the Share menu. This feature allows users to share content to targets, such as contacts, within other apps. For example, the direct share target might launch an activity in another social network app, which lets the user share content directly to a specific friend or community in that app.
To enable direct share targets you must define a class that extends the
The following example shows how you might declare the
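A sketch of how such a service might be wired up in the manifest (the class name is hypothetical; the action, permission, and meta-data names follow the android.service.chooser package added in this release):

```xml
<!-- Hypothetical ChooserTargetService subclass exposing direct share targets. -->
<service
    android:name=".SampleChooserTargetService"
    android:label="@string/share_targets"
    android:permission="android.permission.BIND_CHOOSER_TARGET_SERVICE">
  <intent-filter>
    <action android:name="android.service.chooser.ChooserTargetService" />
  </intent-filter>
</service>

<!-- In the activity that shares content, point the chooser at the service. -->
<activity android:name=".ShareActivity">
  <meta-data
      android:name="android.service.chooser.chooser_target_service"
      android:value=".SampleChooserTargetService" />
</activity>
```

The service's job is to return a ranked list of chooser targets (for example, recent contacts) that the system then surfaces directly in the Share menu.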
This preview provides a new voice interaction API which, together with Voice Actions, allows you to build conversational voice experiences into your apps. Call the
Most voice interactions originate from a user voice action. A voice interaction activity can also, however, start without user input. For example, another app launched through a voice interaction can also send an intent to launch a voice interaction. To determine if your activity launched from a user voice query or from another voice interaction app, call the
To learn more about implementing voice actions, see the Voice Actions developer site.
This preview offers a new way for users to engage with your apps through an assistant. To use this feature, the user must enable the assistant to use the current context. Once enabled, the user can summon the assistant within any app, by long-pressing on the Home button.
Your app can elect to not share the current context with the assistant by setting the
To provide the assistant with additional context from your app, follow these steps:
This preview adds the following API changes for notifications:
This preview provides improved support for user input using a Bluetooth stylus. Users can pair and connect a compatible Bluetooth stylus with their phone or tablet. While connected, position information from the touch screen is fused with pressure and button information from the stylus to provide a greater range of expression than with the touch screen alone. Your app can listen for stylus button presses and perform secondary actions, by registering
Use the
If your app performs Bluetooth Low Energy scans, use the new
This preview adds support for the Hotspot 2.0 Release 1 spec on Nexus 6 and Nexus 9 devices. To provision Hotspot 2.0 credentials in your app, use the new methods of the
The platform now allows apps to request that the display resolution be upgraded to 4K rendering on compatible hardware. To query the current physical resolution, use the new
You can request the system to change the physical resolution in your app as it runs, by setting the
Theme attributes are now supported in
This preview adds enhancements to audio processing on Android, including:
This preview adds new capabilities to the video processing APIs, including:
This preview includes the following new APIs for accessing the camera’s flashlight and for camera reprocessing of images:
You can register a callback to be notified about torch mode status by calling the
Use the
The
This preview includes the following new APIs for Android for Work:
Important: You may now publish apps that target Android 6.0 (API level 23) to the Google Play store.
Note: If you have been working with previous preview releases and want to see the differences between the final API and previous preview versions, download the additional difference reports included in the preview docs reference.
Important behavior changes
If you have previously published an app for Android, be aware that your app might be affected by changes in the platform. Please see Behavior Changes for complete information.
App Linking
This preview enhances Android’s intent system by providing more powerful app linking. This feature allows you to associate an app with a web domain you own. Based on this association, the platform can determine the default app to use to handle a particular web link and skip prompting users to select an app. To learn how to implement this feature, see App Linking.
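As a sketch (the domain below is hypothetical), an activity opts into automatic link handling with an intent filter marked android:autoVerify="true", which asks the platform to verify the association against the asset links file hosted on that domain:

```xml
<!-- Hypothetical example: claim links to www.example.com -->
<activity android:name=".MainActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="http" android:host="www.example.com" />
        <data android:scheme="https" android:host="www.example.com" />
    </intent-filter>
</activity>
```

If verification succeeds, links to that domain open directly in the app without the disambiguation dialog.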
Auto Backup for Apps
The system now performs automatic full data backup and restore for apps. For the duration of the M Developer Preview program, all apps are backed up, independent of which SDK version they target. After the final M SDK release, your app must target M to enable this behavior; you do not need to add any additional code. If users delete their Google accounts, their backup data is deleted as well. To learn how this feature works and how to configure what to back up on the file system, see Auto Backup for Apps.
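For illustration (the resource name and paths below are hypothetical), backup behavior can be scoped by pointing the manifest at an XML rules file:

```xml
<!-- AndroidManifest.xml: point the system at custom backup rules -->
<application android:fullBackupContent="@xml/backup_rules">
</application>

<!-- res/xml/backup_rules.xml: include or exclude specific files -->
<full-backup-content>
    <include domain="sharedpref" path="settings.xml" />
    <exclude domain="database" path="cache.db" />
</full-backup-content>
```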
Authentication
This preview offers new APIs to let you authenticate users by using their fingerprint scans on supported devices, and check how recently the user was last authenticated using a device unlocking mechanism (such as a lockscreen password). Use these APIs in conjunction with the Android Keystore system.
Fingerprint Authentication
To authenticate users via fingerprint scan, get an instance of the new FingerprintManager class and call the
authenticate()
method. Your app must be running on a compatible
device with a fingerprint sensor. You must implement the user interface for the fingerprint
authentication flow in your app, and use the standard Android fingerprint icon in your UI.
The Android fingerprint icon (c_fp_40px.png) is included in the
sample app. If you are developing multiple apps that use fingerprint
authentication, note that each app must authenticate the user’s fingerprint independently.
To use this feature in your app, first add the
USE_FINGERPRINT permission in your manifest:
<uses-permission android:name="android.permission.USE_FINGERPRINT" />
To see an app implementation of fingerprint authentication, refer to the
Fingerprint Dialog sample. For a demonstration of how you can use these authentication
APIs in conjunction with other Android APIs, see the video
Fingerprint and Payment APIs.
If you are testing this feature, follow these steps:
- Install Android SDK Tools Revision 24.3, if you have not done so.
- Enroll a new fingerprint in the emulator by going to Settings > Security > Fingerprint, then follow the enrollment instructions.
- Use an emulator to emulate fingerprint touch events with the
following command. Use the same command to emulate fingerprint touch events on the lockscreen or
in your app.
adb -e emu finger touch <finger_id>
On Windows, you may have to run telnet 127.0.0.1 <emulator-id> followed by finger touch <finger_id>.
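Putting the pieces together, a minimal sketch might look like the following (the CryptoObject setup from the Android Keystore is assumed, and UI handling is abbreviated):

```java
// Sketch only: assumes the USE_FINGERPRINT permission is declared in the manifest.
FingerprintManager fingerprintManager =
        (FingerprintManager) context.getSystemService(Context.FINGERPRINT_SERVICE);

if (fingerprintManager.isHardwareDetected()
        && fingerprintManager.hasEnrolledFingerprints()) {
    CancellationSignal cancel = new CancellationSignal();
    fingerprintManager.authenticate(cryptoObject, cancel, 0 /* flags */,
            new FingerprintManager.AuthenticationCallback() {
                @Override
                public void onAuthenticationSucceeded(
                        FingerprintManager.AuthenticationResult result) {
                    // Fingerprint recognized; proceed with the protected operation.
                }

                @Override
                public void onAuthenticationFailed() {
                    // Fingerprint not recognized; prompt the user to retry.
                }
            }, null /* handler */);
}
```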
Confirm Credential
Your app can authenticate users based on how recently they last unlocked their device. This feature frees users from having to remember additional app-specific passwords, and avoids the need for you to implement your own authentication user interface. Your app should use this feature in conjunction with a public or secret key implementation for user authentication.
To set the timeout duration for which the same key can be re-used after a user is successfully authenticated, call the new
setUserAuthenticationValidityDurationSeconds()
method when you set up a KeyGenerator or
KeyPairGenerator. Avoid showing the re-authentication dialog excessively; your app should try using the cryptographic object first, and if the timeout expires, use the
createConfirmDeviceCredentialIntent()
method to re-authenticate the user within your app.
To see an app implementation of this feature, refer to the Confirm Credential sample.
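A minimal sketch of both halves (the key alias and request code are hypothetical):

```java
// Sketch: generate a key that stays usable for 30 seconds after the last unlock.
KeyGenerator keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
keyGenerator.init(new KeyGenParameterSpec.Builder("my_key",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setUserAuthenticationRequired(true)
        .setUserAuthenticationValidityDurationSeconds(30)
        .build());
keyGenerator.generateKey();

// If using the key later fails because the timeout expired, re-authenticate:
KeyguardManager keyguardManager =
        (KeyguardManager) context.getSystemService(Context.KEYGUARD_SERVICE);
Intent intent = keyguardManager.createConfirmDeviceCredentialIntent(
        "Unlock required", "Confirm your screen lock to continue");
if (intent != null) {
    activity.startActivityForResult(intent, REQUEST_CODE_CONFIRM);
}
```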
Direct Share
This preview provides you with APIs to make sharing intuitive and quick for users. You can now
define direct share targets that launch a specific activity in your app. These direct share
targets are exposed to users via the Share menu. This feature allows users to share
content to targets, such as contacts, within other apps. For example, the direct share target might
launch an activity in another social network app, which lets the user share content directly to a
specific friend or community in that app.
To enable direct share targets, you must define a class that extends the
ChooserTargetService class. Declare your
service in the manifest. Within that declaration, specify the
BIND_CHOOSER_TARGET_SERVICE permission and an
intent filter using the
SERVICE_INTERFACE action.
The following example shows how you might declare the
ChooserTargetService in your manifest:
<service android:name=".ChooserTargetService"
        android:label="@string/service_name"
        android:permission="android.permission.BIND_CHOOSER_TARGET_SERVICE">
    <intent-filter>
        <action android:name="android.service.chooser.ChooserTargetService" />
    </intent-filter>
</service>
For each activity that you want to expose to
ChooserTargetService, add a
<meta-data> element with the name
"android.service.chooser.chooser_target_service" in your app manifest.
<activity android:name=".MyShareActivity"
        android:label="@string/share_activity_label">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
    </intent-filter>
    <meta-data android:name="android.service.chooser.chooser_target_service"
            android:value=".ChooserTargetService" />
</activity>
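The service itself returns the targets to show in the Share menu. A minimal sketch (the contact name, icon resource, and activity class are hypothetical):

```java
// Sketch of a ChooserTargetService returning one direct share target.
public class ChooserTargetService extends android.service.chooser.ChooserTargetService {
    @Override
    public List<ChooserTarget> onGetChooserTargets(
            ComponentName targetActivityName, IntentFilter matchedFilter) {
        List<ChooserTarget> targets = new ArrayList<>();
        targets.add(new ChooserTarget(
                "Alice",                                         // label in the Share menu
                Icon.createWithResource(this, R.drawable.alice), // hypothetical icon
                1.0f,                                            // relative ranking score
                new ComponentName(getPackageName(),
                        MyShareActivity.class.getCanonicalName()),
                new Bundle()));                                  // extras for the intent
        return targets;
    }
}
```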
Voice Interactions
This preview provides a new voice interaction API which, together with Voice Actions, allows you to build conversational voice experiences into your apps. Call the
isVoiceInteraction() method to determine if a voice action triggered
your activity. If so, your app can use the
VoiceInteractor class to request a voice confirmation from the user, select
from a list of options, and more.
Most voice interactions originate from a user voice action. A voice interaction activity can also, however, start without user input. For example, another app launched through a voice interaction can also send an intent to launch a voice interaction. To determine if your activity launched from a user voice query or from another voice interaction app, call the
isVoiceInteractionRoot() method. If another app launched your
activity, the method returns false. Your app may then prompt the user to confirm that
they intended this action.
To learn more about implementing voice actions, see the Voice Actions developer site.
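As a sketch, an activity launched by a voice action might request a spoken confirmation like this (the prompt text is hypothetical):

```java
// Sketch: inside an Activity started by a voice action.
if (isVoiceInteraction()) {
    VoiceInteractor interactor = getVoiceInteractor();
    interactor.submitRequest(new VoiceInteractor.ConfirmationRequest(
            new VoiceInteractor.Prompt("Are you sure you want to proceed?"),
            null /* extras */) {
        @Override
        public void onConfirmationResult(boolean confirmed, Bundle result) {
            if (confirmed) {
                // Carry out the requested action.
            } else {
                finish();
            }
        }
    });
}
```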
Assist API
This preview offers a new way for users to engage with your apps through an assistant. To use this feature, the user must enable the assistant to use the current context. Once enabled, the user can summon the assistant within any app, by long-pressing on the Home button.
Your app can elect to not share the current context with the assistant by setting the
FLAG_SECURE flag. In addition to the
standard set of information that the platform passes to the assistant, your app can share
additional information by using the new AssistContent class.
To provide the assistant with additional context from your app, follow these steps:
- Implement the Application.OnProvideAssistDataListener interface.
- Register this listener by using registerOnProvideAssistDataListener().
- To provide activity-specific contextual information, override the onProvideAssistData() callback and, optionally, the new onProvideAssistContent() callback.
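The activity-specific callback can be sketched as follows (the URI is a hypothetical example of content the activity is displaying):

```java
// Sketch: supply extra context to the assistant from an Activity.
@Override
public void onProvideAssistContent(AssistContent assistContent) {
    super.onProvideAssistContent(assistContent);
    // Describe the currently displayed item as a web URI (hypothetical).
    assistContent.setWebUri(Uri.parse("https://www.example.com/item/42"));
}
```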
Notifications
This preview adds the following API changes for notifications:
- New INTERRUPTION_FILTER_ALARMS filter level that corresponds to the new Alarms only do not disturb mode.
- New CATEGORY_REMINDER category value that is used to distinguish user-scheduled reminders from other events (CATEGORY_EVENT) and alarms (CATEGORY_ALARM).
- New Icon class that you can attach to your notifications via the setSmallIcon() and setLargeIcon() methods. Similarly, the addAction() method now accepts an Icon object instead of a drawable resource ID.
- New getActiveNotifications() method that allows your apps to find out which of their notifications are currently alive. To see an app implementation that uses this feature, see the Active Notifications sample.
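A short sketch of the new Icon overloads and the active-notifications query (the drawable resource is hypothetical):

```java
// Sketch: build a notification using an Icon instead of a resource ID.
Icon smallIcon = Icon.createWithResource(context, R.drawable.ic_notify);
Notification notification = new Notification.Builder(context)
        .setSmallIcon(smallIcon)
        .setContentTitle("Reminder")
        .setCategory(Notification.CATEGORY_REMINDER)
        .build();

// Query which of this app's notifications are still alive.
NotificationManager nm =
        (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
StatusBarNotification[] active = nm.getActiveNotifications();
```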
Bluetooth Stylus Support
This preview provides improved support for user input using a Bluetooth stylus. Users can pair and connect a compatible Bluetooth stylus with their phone or tablet. While connected, position information from the touch screen is fused with pressure and button information from the stylus to provide a greater range of expression than with the touch screen alone. Your app can listen for stylus button presses and perform secondary actions, by registering
View.OnContextClickListener and
GestureDetector.OnContextClickListener objects in your activity.
Use the MotionEvent methods and constants to detect stylus button interactions:
- If the user touches a stylus with a button on the screen of your app, the getToolType() method returns TOOL_TYPE_STYLUS.
- For apps targeting M Preview, the getButtonState() method returns BUTTON_STYLUS_PRIMARY when the user presses the primary stylus button. If the stylus has a second button, the same method returns BUTTON_STYLUS_SECONDARY when the user presses it. If the user presses both buttons simultaneously, the method returns both values OR'ed together (BUTTON_STYLUS_PRIMARY|BUTTON_STYLUS_SECONDARY).
- For apps targeting a lower platform version, the getButtonState() method returns BUTTON_SECONDARY (for the primary stylus button press), BUTTON_TERTIARY (for the secondary stylus button press), or both.
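Registering for stylus button presses on a view can be sketched as:

```java
// Sketch: respond to a primary stylus button press on a view.
view.setOnContextClickListener(new View.OnContextClickListener() {
    @Override
    public boolean onContextClick(View v) {
        // Perform the secondary action, e.g. show a context menu.
        return true;
    }
});
```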
Improved Bluetooth Low Energy Scanning
If your app performs Bluetooth Low Energy scans, use the new
setCallbackType()
method to specify that you want the system to notify callbacks when it first finds, or sees after a
long time, an advertisement packet matching the set ScanFilter. This
approach to scanning is more power-efficient than what’s provided in the previous platform version.
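A sketch of the new scan configuration (filters and scanCallback are assumed to be set up elsewhere):

```java
// Sketch: power-efficient scan that reports first matches and lost devices.
ScanSettings settings = new ScanSettings.Builder()
        .setScanMode(ScanSettings.SCAN_MODE_LOW_POWER)
        .setCallbackType(ScanSettings.CALLBACK_TYPE_FIRST_MATCH
                | ScanSettings.CALLBACK_TYPE_MATCH_LOST)
        .build();
bluetoothLeScanner.startScan(filters, settings, scanCallback);
```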
Hotspot 2.0 Release 1 Support
This preview adds support for the Hotspot 2.0 Release 1 spec on Nexus 6 and Nexus 9 devices. To provision Hotspot 2.0 credentials in your app, use the new methods of the
WifiEnterpriseConfig class, such as
setPlmn() and
setRealm(). In the
WifiConfiguration object, you can set the
FQDN and the
providerFriendlyName fields.
The new isPasspointNetwork() method indicates if a detected
network represents a Hotspot 2.0 access point.
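A provisioning sketch (all credential values below are hypothetical):

```java
// Sketch: provision Hotspot 2.0 credentials.
WifiEnterpriseConfig enterpriseConfig = new WifiEnterpriseConfig();
enterpriseConfig.setPlmn("310026");        // hypothetical home PLMN
enterpriseConfig.setRealm("example.com");  // hypothetical carrier realm

WifiConfiguration config = new WifiConfiguration();
config.enterpriseConfig = enterpriseConfig;
config.FQDN = "hotspot.example.com";
config.providerFriendlyName = "Example Provider";
```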
4K Display Mode
The platform now allows apps to request that the display resolution be upgraded to 4K rendering on compatible hardware. To query the current physical resolution, use the new
Display.Mode APIs. If the UI is drawn at a lower logical resolution and is
upscaled to a larger physical resolution, be aware that the physical resolution the
getPhysicalWidth() method returns may differ from the logical
resolution reported by getSize().
You can request the system to change the physical resolution in your app as it runs, by setting the
preferredDisplayModeId property of your app’s
window. This feature is useful if you want to switch to 4K display resolution. While in 4K display
mode, the UI continues to be rendered at the original resolution (such as 1080p) and is upscaled to
4K, but SurfaceView objects may show content at the native resolution.
Themeable ColorStateLists
Theme attributes are now supported in
ColorStateList for devices running the M Preview. The Resources.getColorStateList() and Resources.getColor() methods have been deprecated. If you are calling these APIs, call the new Context.getColorStateList() or Context.getColor() methods instead. These methods are also available in the v4 appcompat library via ContextCompat.
Audio Features
This preview adds enhancements to audio processing on Android, including:
- Support for the MIDI protocol, with the new android.media.midi APIs. Use these APIs to send and receive MIDI events.
- New AudioRecord.Builder and AudioTrack.Builder classes to create digital audio capture and playback objects respectively, and configure audio source and sink properties to override the system defaults.
- API hooks for associating audio and input devices. This is particularly useful if your app allows users to start a voice search from a game controller or remote control connected to Android TV. The system invokes the new onSearchRequested() callback when the user starts a search. To determine if the user's input device has a built-in microphone, retrieve the InputDevice object from that callback, then call the new hasMicrophone() method.
- New getDevices() method which lets you retrieve a list of all audio devices currently connected to the system. You can also register an AudioDeviceCallback object if you want the system to notify your app when an audio device connects or disconnects.
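Two of these additions can be sketched together (the sample rate and format are illustrative choices):

```java
// Sketch: create an audio capture object with the new builder.
AudioRecord record = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.MIC)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
                .build())
        .build();

// Enumerate currently connected audio devices.
AudioManager audioManager =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
AudioDeviceInfo[] devices = audioManager.getDevices(AudioManager.GET_DEVICES_ALL);
```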
Video Features
This preview adds new capabilities to the video processing APIs, including:
- New MediaSync class which helps applications to synchronously render audio and video streams. The audio buffers are submitted in non-blocking fashion and are returned via a callback. It also supports dynamic playback rate.
- New EVENT_SESSION_RECLAIMED event, which indicates that a session opened by the app has been reclaimed by the resource manager. If your app uses DRM sessions, you should handle this event and make sure not to use a reclaimed session.
- New ERROR_RECLAIMED error code, which indicates that the resource manager reclaimed the media resource used by the codec. With this exception, the codec must be released, as it has moved to a terminal state.
- New getMaxSupportedInstances() interface to get a hint for the maximum number of supported concurrent codec instances.
- New setPlaybackParams() method to set the media playback rate for fast or slow motion playback. It also stretches or speeds up the audio playback automatically in conjunction with the video.
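For example, a playback-rate sketch (mediaPlayer is assumed to be a prepared MediaPlayer):

```java
// Sketch: play back media at double speed using the new playback parameters.
PlaybackParams params = new PlaybackParams().setSpeed(2.0f);
mediaPlayer.setPlaybackParams(params);
```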
Camera Features
This preview includes the following new APIs for accessing the camera’s flashlight and for camera reprocessing of images:
Flashlight API
If a camera device has a flash unit, you can call the setTorchMode()
method to switch the flash unit’s torch mode on or off without opening the camera device. The app
does not have exclusive ownership of the flash unit or the camera device. The torch mode is turned
off and becomes unavailable whenever the camera device becomes unavailable, or when other camera
resources keeping the torch on become unavailable. Other apps can also call
setTorchMode()
to turn off the torch mode. When the last app that turned on the torch mode is closed, the torch
mode is turned off.
You can register a callback to be notified about torch mode status by calling the
registerTorchCallback()
method. The first time the callback is registered, it is immediately called with the torch mode
status of all currently known camera devices with a flash unit. If the torch mode is turned on or
off successfully, the
onTorchModeChanged()
method is invoked.
Reprocessing API
The Camera2 API is extended to support YUV and private
opaque format image reprocessing. To determine if these reprocessing capabilities are available,
call getCameraCharacteristics() and check for the
REPROCESS_MAX_CAPTURE_STALL key. If a
device supports reprocessing, you can create a reprocessable camera capture session by calling
createReprocessableCaptureSession(),
and create requests for input buffer reprocessing.
Use the
ImageWriter class to connect the input buffer flow to the camera
reprocessing input. To get an empty buffer, follow this programming model:
- Call the dequeueInputImage() method.
- Fill the data into the input buffer.
- Send the buffer to the camera by calling the queueInputImage() method.
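The steps above can be sketched as follows (cameraInputSurface is assumed to come from the reprocessable capture session's getInputSurface()):

```java
// Sketch of the input-buffer flow; session setup is omitted.
ImageWriter imageWriter = ImageWriter.newInstance(cameraInputSurface, 2 /* maxImages */);

Image inputImage = imageWriter.dequeueInputImage(); // 1. get an empty buffer
// 2. fill inputImage's planes with the frame data to reprocess
imageWriter.queueInputImage(inputImage);            // 3. send it to the camera
```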
If you are using an ImageWriter object together with a
PRIVATE image, your app cannot access the image
data directly. Instead, pass the PRIVATE image directly to the
ImageWriter by calling the
queueInputImage() method
without any buffer copy.
The
ImageReader class now supports
PRIVATE format image streams. This support allows your app to
maintain a circular image queue of ImageReader output images, select one or
more images, and send them to the ImageWriter for camera reprocessing.
Android for Work Features
This preview includes the following new APIs for Android for Work:
- Enhanced controls for Corporate-Owned, Single-Use devices: The Device Owner can now control the following settings to improve management of Corporate-Owned, Single-Use (COSU) devices:
  - Disable or re-enable the keyguard with the setKeyguardDisabled() method.
  - Disable or re-enable the status bar (including quick settings, notifications, and the navigation swipe-up gesture that launches Google Now) with the setStatusBarDisabled() method.
  - Disable or re-enable safe boot with the UserManager constant DISALLOW_SAFE_BOOT.
  - Prevent the screen from turning off while plugged in with the STAY_ON_WHILE_PLUGGED_IN constant.
- Silent install and uninstall of apps by Device Owner: A Device Owner can now silently install and uninstall applications using the PackageInstaller APIs, independent of Google Play for Work. You can now provision devices through a Device Owner that fetches and installs apps without user interaction. This feature is useful for enabling one-touch provisioning of kiosks or other such devices without activating a Google account.
- Silent enterprise certificate access: When an app calls choosePrivateKeyAlias(), prior to the user being prompted to select a certificate, the Profile or Device Owner can now call the onChoosePrivateKeyAlias() method to provide the alias silently to the requesting application. This feature lets you grant managed apps access to certificates without user interaction.
- Auto-acceptance of system updates: By setting a system update policy with setSystemUpdatePolicy(), a Device Owner can now auto-accept a system update, for instance in the case of a kiosk device, or postpone the update and prevent it from being taken by the user for up to 30 days. Furthermore, an administrator can set a daily time window in which an update must be taken, for example during the hours when a kiosk device is not in use. When a system update is available, the system checks if the Work Policy Controller app has set a system update policy, and behaves accordingly.
- Delegated certificate installation: A Profile or Device Owner can now grant a third-party app the ability to call the DevicePolicyManager certificate management APIs.
- Data usage tracking: A Profile or Device Owner can now query for the data usage statistics visible in Settings > Data usage by using the new NetworkStatsManager methods. Profile Owners are automatically granted permission to query data on the profile they manage, while Device Owners get access to usage data of the managed primary user.
- Runtime permission management: A Profile or Device Owner can set a permission policy for all runtime requests of all applications using setPermissionPolicy(), to either prompt the user to grant the permission or automatically grant or deny the permission silently. If the latter policy is set, the user cannot modify the selection made by the Profile or Device Owner within the app's permissions screen in Settings.
- VPN in Settings: VPN apps are now visible in Settings > More > VPN. Additionally, the notifications that accompany VPN usage are now specific to how that VPN is configured. For a Profile Owner, the notifications are specific to whether the VPN is configured for a managed profile, a personal profile, or both. For a Device Owner, the notifications are specific to whether the VPN is configured for the entire device.
- Work status notification: A status bar briefcase icon now appears whenever an app from the managed profile has an activity in the foreground. Furthermore, if the device is unlocked directly to the activity of an app in the managed profile, a toast is displayed notifying the user that they are within the work profile.
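Several of the COSU and permission controls above can be sketched in one place (adminComponent is assumed to be the app's DeviceAdminReceiver component):

```java
// Sketch: a Device Owner locking down a COSU device.
DevicePolicyManager dpm =
        (DevicePolicyManager) context.getSystemService(Context.DEVICE_POLICY_SERVICE);
dpm.setKeyguardDisabled(adminComponent, true);   // disable the keyguard
dpm.setStatusBarDisabled(adminComponent, true);  // disable the status bar
dpm.addUserRestriction(adminComponent, UserManager.DISALLOW_SAFE_BOOT);
dpm.setPermissionPolicy(adminComponent,
        DevicePolicyManager.PERMISSION_POLICY_AUTO_GRANT); // silently grant runtime permissions
```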
For a detailed view of all API changes in the M Developer Preview, see the API Differences Report.