Original article: https://developer.android.com/preview/features.html#rtt
Chinese translation: https://mp.weixin.qq.com/s/9kASHj-L1f-Cj0JCKYouFw
Android P introduces great new features and capabilities for users and developers. This document highlights what's new for developers.
To learn about the new APIs, read the API diff report or visit the Android API reference — new APIs are highlighted to make them easy to see. Also be sure to check out Android P Behavior Changes to learn about areas where platform changes may affect your apps.
Indoor Positioning with Wi-Fi RTT
Android P adds platform support for the IEEE 802.11mc Wi-Fi protocol—also known as Wi-Fi Round-Trip-Time (RTT)—to let you take advantage of indoor positioning in your apps.
On Android P devices with hardware support, your apps can use the new RTT APIs to measure the distance to nearby RTT-capable Wi-Fi Access Points (APs). The device must have location enabled and Wi-Fi scanning turned on (under Settings > Location), and your app must have the ACCESS_FINE_LOCATION permission. The device doesn't need to connect to the APs to use RTT. To maintain privacy, only the phone is able to determine the distance to the AP; the APs do not have this information.
If your device measures the distance to 3 or more APs, you can use a multilateration algorithm to estimate the device position that best fits those measurements. The result is typically accurate within 1 to 2 meters.
With this accuracy, you can build new experiences like in-building navigation, fine-grained location-based services such as disambiguated voice control (for example, "Turn on this light"), and location-based information (such as "Are there special offers for this product?").
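As a rough sketch of the flow described above, the snippet below requests a single ranging measurement against RTT-capable APs found in a Wi-Fi scan. It assumes a `scanResults` list of 802.11mc-capable APs and a granted ACCESS_FINE_LOCATION permission; API names follow the Android P preview and may shift in later releases.

```java
// Obtain the RTT ranging service.
WifiRttManager rttManager =
        (WifiRttManager) context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE);

// Build a request from RTT-capable APs discovered in a normal Wi-Fi scan.
RangingRequest request = new RangingRequest.Builder()
        .addAccessPoints(scanResults)  // List<ScanResult>, assumed available
        .build();

rttManager.startRanging(request, context.getMainExecutor(),
        new RangingResultCallback() {
            @Override
            public void onRangingResults(List<RangingResult> results) {
                for (RangingResult result : results) {
                    if (result.getStatus() == RangingResult.STATUS_SUCCESS) {
                        // Distance to this AP in millimeters; distances to 3+
                        // APs can feed a multilateration algorithm.
                        int distanceMm = result.getDistanceMm();
                    }
                }
            }

            @Override
            public void onRangingFailure(int code) {
                // Ranging could not be performed (e.g., RTT unavailable).
            }
        });
```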
Display cutout support
Android P offers support for the latest edge-to-edge screens with display cutout for camera and speaker. The new DisplayCutout class lets you find out the location and shape of the non-functional areas where content shouldn't be displayed. To determine the existence and placement of these cutout areas, use the getDisplayCutout() method.
A new window layout attribute, layoutInDisplayCutoutMode, allows your app to lay out its content around a device's cutouts. You can set this attribute to one of the following values:
- LAYOUT_IN_DISPLAY_CUTOUT_MODE_DEFAULT
- LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS
- LAYOUT_IN_DISPLAY_CUTOUT_MODE_NEVER
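A minimal sketch of using these APIs together: opt the window into drawing around the cutout, then inset content away from the cutout once insets are dispatched. Names follow the P preview (the ALWAYS mode and the insets-based getDisplayCutout() accessor may differ in final releases).

```java
// Let the window extend into the cutout region.
WindowManager.LayoutParams lp = getWindow().getAttributes();
lp.layoutInDisplayCutoutMode =
        WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS;
getWindow().setAttributes(lp);

// Pad the content view so nothing is drawn under the cutout itself.
View root = findViewById(android.R.id.content);
root.setOnApplyWindowInsetsListener((view, insets) -> {
    DisplayCutout cutout = insets.getDisplayCutout();  // null if no cutout
    if (cutout != null) {
        view.setPadding(cutout.getSafeInsetLeft(), cutout.getSafeInsetTop(),
                cutout.getSafeInsetRight(), cutout.getSafeInsetBottom());
    }
    return insets;
});
```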
You can simulate a screen cutout on any device or emulator running Android P as follows:
- Enable developer options.
- In the Developer options screen, scroll down to the Drawing section and select Simulate a display with a cutout.
- Select the size of the cutout.
Notifications
Android P introduces several enhancements to notifications, all of which are available to developers targeting Android P and above.
Enhanced messaging experience
Starting in Android 7.0 (API level 24), you can add an action to reply to messages or enter other text directly from a notification. Android P builds on this feature with the following enhancements:
- Support for images: Android P now displays images in messaging notifications on phones. You can use setData() on the message to display an image.
- Simplified support for conversation participants: The new Notification.Person class is used to identify people involved in a conversation, including their avatars and URIs. Many other APIs, such as addMessage(), now leverage the Person class instead of a CharSequence.
- Save replies as drafts: Your app can retrieve the EXTRA_REMOTE_INPUT_DRAFT sent by the system when a user inadvertently closes a messaging notification. You can use this extra to pre-populate text fields in the app so users can finish their reply.
- Identify if a conversation is a group conversation: You can use setGroupConversation() to purposefully identify a conversation as a group or non-group conversation.
- Set the semantic action for an intent: The setSemanticAction() method allows you to give semantic meaning to an action, such as mark as read, delete, reply, and so on.
- Smart Reply: Android P supports the same suggested replies available in your messaging app. Use RemoteInput.setChoices() to provide an array of standard responses to the user.
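The pieces above can be combined roughly as follows. This is a sketch against the P preview APIs (the Person class shape changed during the preview); the "chat" channel ID and the small icon resource are hypothetical.

```java
// A conversation participant (Android P preview Notification.Person).
Notification.Person sender = new Notification.Person().setName("Ali");

// MessagingStyle now accepts a Person and can flag group conversations.
Notification.MessagingStyle style = new Notification.MessagingStyle(sender)
        .setGroupConversation(false);

// A message with an attached image, supplied via setData().
Notification.MessagingStyle.Message msg =
        new Notification.MessagingStyle.Message(
                "Check this out", System.currentTimeMillis(), sender)
                .setData("image/png", imageUri);  // imageUri assumed available
style.addMessage(msg);

Notification notification = new Notification.Builder(context, "chat")
        .setSmallIcon(R.drawable.ic_message)  // hypothetical resource
        .setStyle(style)
        .build();
```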
Channel settings, broadcasts, and Do Not Disturb
Android O introduced notification channels, allowing you to create a user-customizable channel for each type of notification you want to display. Android P simplifies notification channel settings with these changes:
- Blocking channel groups: Users can now block entire groups of channels within the notification settings for an app. You can use the isBlocked() method to identify when a group is blocked and, as a result, not send any notifications for channels in that group. Additionally, your app can query for current channel group settings using the new getNotificationChannelGroup() method.
- New broadcast intent types: The Android system now sends broadcast intents when the blocking state of notification channels and channel groups changes. The app that owns the blocked channel or group can listen for these intents and react accordingly. For further information on these intent actions and extras, refer to the updated constants list in the NotificationManager reference. For information on reacting to broadcast intents, refer to Broadcasts.
- New Do Not Disturb priority categories: NotificationManager.Policy has two new policy constants: PRIORITY_CATEGORY_ALARMS (alarms are prioritized) and PRIORITY_CATEGORY_MEDIA_SYSTEM_OTHER (media, system, and game sounds are prioritized).
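A short sketch of the blocked-group check described above; "promo_group" and NOTIFICATION_ID are hypothetical identifiers defined elsewhere in your app.

```java
NotificationManager nm = context.getSystemService(NotificationManager.class);

// New in Android P: query a single channel group by ID.
NotificationChannelGroup group = nm.getNotificationChannelGroup("promo_group");
if (group != null && group.isBlocked()) {
    // The user blocked the whole group; skip notifying on its channels.
    return;
}
nm.notify(NOTIFICATION_ID, notification);
```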
Multi-camera support and camera updates
You can now access streams simultaneously from two or more physical cameras on devices running Android P. On devices with either dual-front or dual-back cameras, you can create innovative features not possible with just a single camera, such as seamless zoom, bokeh, and stereo vision. The API also lets you call a logical or fused camera stream that automatically switches between two or more cameras.
Other improvements in camera include new Session parameters that help to reduce delays during initial capture, and Surface sharing that lets camera clients handle various use-cases without the need to stop and start camera streaming. We’ve also added APIs for display-based flash support and access to OIS timestamps for app-level image stabilization and special effects.
Android P also enables support for external USB/UVC cameras on supported devices.
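As a sketch of discovering a logical multi-camera with the camera2 additions mentioned above (capability and method names per the Android P APIs):

```java
CameraManager manager =
        (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    for (String id : manager.getCameraIdList()) {
        CameraCharacteristics chars = manager.getCameraCharacteristics(id);
        int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
        for (int cap : caps) {
            if (cap == CameraCharacteristics
                    .REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) {
                // The physical cameras backing this logical camera; they can
                // be streamed together for seamless zoom, bokeh, or stereo.
                Set<String> physicalIds = chars.getPhysicalCameraIds();
            }
        }
    }
} catch (CameraAccessException e) {
    // Camera service unavailable or access denied.
}
```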
ImageDecoder for bitmaps and drawables
Android P introduces ImageDecoder to provide a modernized approach for decoding images. You should use ImageDecoder to decode an image rather than the BitmapFactory and BitmapFactory.Options APIs.
ImageDecoder lets you create a Drawable or a Bitmap from a byte buffer, a file, or a URI. To decode an image, first call createSource() with the source of the encoded image. Then, call decodeBitmap() or decodeDrawable() by passing the ImageDecoder.Source object to create a Bitmap or a Drawable. To change default settings, pass an OnHeaderDecodedListener to decodeBitmap() or decodeDrawable(). ImageDecoder calls onHeaderDecoded() with the image's default width and height, once they are known. If the encoded image is an animated GIF or WebP, decodeDrawable() returns a Drawable that is an instance of the AnimatedImageDrawable class.
There are different methods you can use to set image properties. These include:
- To scale the decoded image to an exact size, call setResize() with the target dimensions. You can also scale images using a sample size. Pass the sample size directly to setResize(), or call getSampledSize() to find out what size ImageDecoder can sample most efficiently.
- To crop an image within the range of the scaled image, call setCrop().
- To create a mutable Bitmap, call setMutable(true).
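These property setters are typically called from the header-decoded listener. A sketch, assuming an `imageUri` obtained elsewhere (setResize() follows the P preview docs; later releases renamed it setTargetSize()):

```java
ImageDecoder.Source source =
        ImageDecoder.createSource(getContentResolver(), imageUri);
try {
    Bitmap bitmap = ImageDecoder.decodeBitmap(source, (decoder, info, src) -> {
        // info carries the image's default dimensions; decode at half size.
        decoder.setResize(info.getSize().getWidth() / 2,
                          info.getSize().getHeight() / 2);
        decoder.setMutable(true);  // allow editing the resulting Bitmap
    });
} catch (IOException e) {
    // The source could not be decoded.
}
```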
ImageDecoder also lets you add customized and complicated effects to an image, such as rounded corners or circle masks. Use setPostProcessor() with an instance of the PostProcessor class to execute whatever drawing commands you want. When you post-process an AnimatedImageDrawable, effects are applied to all frames.
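A sketch of the rounded-corner effect mentioned above, registered from the header-decoded listener (`source` is assumed to come from an earlier createSource() call):

```java
Drawable rounded = ImageDecoder.decodeDrawable(source, (decoder, info, src) -> {
    decoder.setPostProcessor(canvas -> {
        // Punch out everything outside a rounded rect, leaving the
        // corners transparent.
        Path path = new Path();
        path.setFillType(Path.FillType.INVERSE_EVEN_ODD);
        path.addRoundRect(0, 0, canvas.getWidth(), canvas.getHeight(),
                20f, 20f, Path.Direction.CW);
        Paint paint = new Paint();
        paint.setAntiAlias(true);
        paint.setColor(Color.TRANSPARENT);
        paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC));
        canvas.drawPath(path, paint);
        // The result now contains transparency.
        return PixelFormat.TRANSLUCENT;
    });
});
```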
Animation
Android P introduces a new AnimatedImageDrawable class for drawing and displaying GIF and WebP animated images. AnimatedImageDrawable works similarly to AnimatedVectorDrawable in that RenderThread drives the animations of AnimatedImageDrawable. RenderThread also uses a worker thread to decode, so that decoding does not interfere with RenderThread. This implementation allows your app to have an animated image without managing its updates or interfering with your app's UI thread.

An AnimatedImageDrawable can be decoded with the new ImageDecoder. The following code snippet shows how to use ImageDecoder to decode your AnimatedImageDrawable:
Drawable d = ImageDecoder.decodeDrawable(...);
if (d instanceof AnimatedImageDrawable) {
    // Prior to start(), only the first frame is displayed.
    ((AnimatedImageDrawable) d).start();
}
ImageDecoder has several methods allowing you to further modify the image. For example, you can use the setPostProcessor() method to modify the appearance of the image, such as applying a circle mask or rounding corners.
HDR VP9 Video, HEIF image compression, and Media APIs
Android P adds built-in support for High Dynamic Range (HDR) VP9 Profile 2, so you can now deliver HDR-enabled movies to your users from YouTube, Play Movies, and other sources on HDR-capable devices.
Android P adds support for HEIF (heic) image encoding to the platform. HEIF still image samples are supported in the MediaMuxer and MediaExtractor classes. HEIF improves compression to save on storage and network data. With platform support on Android P devices, it's easy to send and utilize HEIF images from your backend server. Once you've made sure that your app is compatible with this data format for sharing and display, give HEIF a try as an image storage format in your app. You can do a jpeg-to-heic conversion using ImageDecoder or BitmapFactory to obtain a bitmap from a jpeg, and you can use HeifWriter in the new Support Library alpha to write HEIF still images from a YUV byte buffer, Surface, or Bitmap.
Android P also introduces MediaPlayer2. This player supports playlists that are built using DataSourceDesc. To create an instance of MediaPlayer2, use MediaPlayer2.create().
Media metrics are now also available from the AudioTrack, AudioRecord, and MediaDrm classes.
Android P adds new methods to the MediaDrm class to get metrics, HDCP levels, security levels, and number of sessions, and to add more control over security levels and secure stops. See the API Diff report for details.
Data cost sensitivity in JobScheduler
With Android P, JobScheduler has been improved to let it better handle network-related jobs for the user, in coordination with network status signals provided separately by carriers.
Jobs can now declare their estimated data size, signal prefetching, and specify detailed network requirements; carriers can report networks as being congested or unmetered. JobScheduler then manages work according to the network status. For example, when a network is congested, JobScheduler might defer large network requests. When on an unmetered network, JobScheduler can run prefetch jobs to improve the user experience, such as by prefetching headlines.
When adding jobs, make sure to use setEstimatedNetworkBytes(), setIsPrefetch(), and setRequiredNetwork() when appropriate to help JobScheduler handle the work properly. When your job executes, be sure to use the Network object returned by JobParameters.getNetwork(). Otherwise you'll implicitly use the device's default network, which may not meet your requirements, causing unintended data usage.
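A sketch of scheduling such a job; method names follow the P preview (setIsPrefetch() was later renamed setPrefetch()), and JOB_ID and NewsJobService are hypothetical.

```java
JobInfo job = new JobInfo.Builder(JOB_ID,
        new ComponentName(context, NewsJobService.class))
        // Rough size hint: ~1 MB download, ~1 KB upload.
        .setEstimatedNetworkBytes(1_000_000, 1_000)
        // This job prefetches content the user will likely want soon.
        .setIsPrefetch(true)
        // Only run when an unmetered network is available.
        .setRequiredNetwork(new NetworkRequest.Builder()
                .addCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
                .build())
        .build();
context.getSystemService(JobScheduler.class).schedule(job);

// Inside the JobService, route traffic over the granted network, e.g.:
// Network network = params.getNetwork();
// URLConnection conn = network.openConnection(url);
```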
Neural Networks API 1.1
The Neural Networks API was introduced in Android 8.1 (API level 27) to accelerate on-device machine learning on Android. Android P expands and improves the API, adding support for nine new ops — Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze.
Autofill framework
Android 8.0 (API level 26) introduced the autofill framework, which makes it easier to fill out forms in apps. Android P introduces multiple improvements that autofill services can implement to further enhance the user experience when filling out forms. For more details, see the Autofill Framework page.
Security enhancements
Android P introduces a number of new security features, including a unified fingerprint authentication dialog and high-assurance user confirmation of sensitive transactions. For more details, see the Security Updates page.
Client-side encryption of Android backups
Android P enables encryption of Android backups with a client-side secret. Because of this privacy measure, the device's PIN, pattern, or password is required to restore data from the backups made by the user's device. To learn more about the technology behind this new feature, see the Google Cloud Key Vault Service whitepaper.
To learn more about backing up data on Android devices, see Data Backup Overview.
Accessibility
Android P introduces several actions, attributes, and methods to make it easier for you to work with the accessibility framework in order to enhance accessibility services for users.
To learn more about how to make your app more accessible and build accessibility services, see Accessibility.
Navigation semantics
We've added new attributes that you can use to improve navigation from one part of the screen to another. You can use these attributes to help users move through text in your app and bring users to a specific section in your app's UI quickly.
For example, in a shopping app, a screen reader could navigate users directly from one category of deals to the next, without having to move through each item within those categories.
Accessibility pane titles
Prior to Android P, accessibility services couldn't easily determine when a specific section of the screen had updated, such as during fragment transitions.
In Android P, sections now have titles called accessibility pane titles. Accessibility services can receive changes to those titles, enabling them to provide more granular information about what has changed.
To specify the title of a section, use the new android:accessibilityPaneTitle attribute. You can also update the title of a UI section that you replace at runtime using setAccessibilityPaneTitle(). For example, you could provide a title for the content area of a Fragment object.
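A brief sketch of titling a pane at runtime; the container ID and title string are hypothetical.

```java
// Title the container a fragment transaction is about to replace, so
// accessibility services can announce the change.
View pane = findViewById(R.id.content_pane);  // hypothetical ID
pane.setAccessibilityPaneTitle("Shopping cart");

// Statically, the same can be declared in layout XML:
// <FrameLayout
//     android:id="@+id/content_pane"
//     android:accessibilityPaneTitle="@string/cart_title" ... />
```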
Heading-based navigation
If your app displays content that includes logical headers, set the new android:accessibilityHeading attribute to true for the instances of View that represent those headers. This allows users to navigate from one heading to the next. This navigation process is particularly handy when users are interacting with a screen reader.
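The runtime equivalent of the attribute is a one-line call on the header view; the view ID here is hypothetical.

```java
// Mark a section header so screen-reader users can jump between headings;
// equivalent to android:accessibilityHeading="true" in layout XML.
TextView dealsHeader = findViewById(R.id.deals_header);  // hypothetical ID
dealsHeader.setAccessibilityHeading(true);
```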
Group navigation and output
Screen readers have traditionally used the android:focusable attribute to determine which sections of the screen should be read as units. Sometimes, these screen readers need to dictate the contents of several View objects as a single unit. That way, users can understand that these views are logically related to one another.

Prior to Android P, you needed to mark each inner View object as non-focusable and the group containing them as focusable. This arrangement caused some instances of View to be marked focusable in a way that made keyboard navigation more cumbersome.
In Android P, you can use the new android:screenReaderFocusable attribute in place of the android:focusable attribute in situations where making a View object focusable has undesirable side effects. Screen readers should focus on all elements that have set either android:screenReaderFocusable or android:focusable to true.
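A short sketch of the group-navigation pattern above; the container ID is hypothetical.

```java
// Let screen readers speak this row (label + value) as one unit without
// making the container keyboard-focusable. Equivalent to setting
// android:screenReaderFocusable="true" in layout XML.
ViewGroup priceRow = findViewById(R.id.price_row);  // hypothetical ID
priceRow.setScreenReaderFocusable(true);
// The inner views stay non-focusable for keyboard navigation.
```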
Convenience actions
Android P adds support for performing convenience actions on behalf of users:
- Interaction with tooltips: New features in the accessibility framework give you access to tooltips in an app's UI. Use getTooltipText() to read the text of a tooltip, and use the new ACTION_SHOW_TOOLTIP and ACTION_HIDE_TOOLTIP actions to instruct instances of View to show or hide their tooltips.
- New global actions: Android P introduces support for two new device actions in the AccessibilityService class. Your service can now help users lock their devices and take screenshots using the GLOBAL_ACTION_LOCK_SCREEN and GLOBAL_ACTION_TAKE_SCREENSHOT actions, respectively.
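A sketch of these convenience actions from inside a hypothetical AccessibilityService subclass, where `node` is an AccessibilityNodeInfo obtained from an event or the window tree:

```java
// Read a tooltip, or ask the view to show it if it isn't visible yet.
CharSequence tip = node.getTooltipText();
if (tip == null) {
    node.performAction(
            AccessibilityNodeInfo.AccessibilityAction.ACTION_SHOW_TOOLTIP.getId());
}

// New global actions: lock the screen or take a screenshot for the user.
performGlobalAction(GLOBAL_ACTION_LOCK_SCREEN);
performGlobalAction(GLOBAL_ACTION_TAKE_SCREENSHOT);
```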
Window change details
Android P makes it easier to track updates to an app's windows when an app redraws multiple windows simultaneously. When a TYPE_WINDOWS_CHANGED event occurs, use the getWindowChanges() API to determine how the windows have changed. During a multiwindow update, each window now produces its own set of events. The getSource() method returns the root view of the window associated with each event.
If an app has defined accessibility pane titles for its View objects, your service can recognize when the app's UI is updated. When a TYPE_WINDOW_STATE_CHANGED event occurs, use the new types returned by getContentChangeTypes() to determine how the window has changed. For example, the framework can now detect when a pane has a new title, or when a pane has disappeared.
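A sketch of handling these events inside your AccessibilityService subclass:

```java
@Override
public void onAccessibilityEvent(AccessibilityEvent event) {
    if (event.getEventType() == AccessibilityEvent.TYPE_WINDOWS_CHANGED) {
        // Bitmask of WINDOWS_CHANGE_* flags describing what changed.
        int changes = event.getWindowChanges();
    } else if (event.getEventType()
            == AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED) {
        int types = event.getContentChangeTypes();
        if ((types & AccessibilityEvent.CONTENT_CHANGE_TYPE_PANE_TITLE) != 0) {
            // A pane gained or changed its accessibility pane title.
        }
    }
}
```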
Rotation
To eliminate unintentional rotations, we’ve added a new mode that pins the current orientation even if the device position changes. Users can trigger rotation manually when needed by pressing a new button in the system bar.
The compatibility impacts for apps should be very minimal in most cases. However, if your app has any customized rotate behavior or uses any esoteric screen orientation settings, you might run into issues that could have gone unnoticed before when user rotation preference was always set to portrait. We encourage you to take a look at the rotation behavior in all the key activities of your app and make sure that all of your screen orientation settings are still providing the optimal experience.
For more details, see the associated behavior changes.