
This is a typical configuration for a device with multiple external storage devices, where the primary device is backed by internal storage on the device, and where the secondary device is a physical SD card.
The raw physical device must first be mounted under /mnt/media_rw where only the system and FUSE daemon can access it. vold will then manage the fuse_sdcard1 service when media is inserted/removed.

fstab.hardware

[physical device node]  auto  vfat  defaults  voldmanaged=sdcard1:auto

init.hardware.rc

on init
    mkdir /mnt/shell/emulated 0700 shell shell
    mkdir /storage/emulated 0555 root root

    mkdir /mnt/media_rw/sdcard1 0700 media_rw media_rw
    mkdir /storage/sdcard1 0700 root root

    export EXTERNAL_STORAGE /storage/emulated/legacy
    export EMULATED_STORAGE_SOURCE /mnt/shell/emulated
    export EMULATED_STORAGE_TARGET /storage/emulated
    export SECONDARY_STORAGE /storage/sdcard1

on fs
    setprop ro.crypto.fuse_sdcard true

service sdcard /system/bin/sdcard -u 1023 -g 1023 -l /data/media /mnt/shell/emulated
    class late_start

service fuse_sdcard1 /system/bin/sdcard -u 1023 -g 1023 -w 1023 -d /mnt/media_rw/sdcard1 /storage/sdcard1
    class late_start
    disabled

storage_list.xml

<storage
    android:storageDescription="@string/storage_internal"
    android:emulated="true"
    android:mtpReserve="100" />
<storage
    android:mountPoint="/storage/sdcard1"
    android:storageDescription="@string/storage_sd_card"
    android:removable="true"
    android:maxFileSize="4096" />


This is a typical configuration for a device with a single external storage device that is backed by internal storage on the device.

init.hardware.rc

on init
    mkdir /mnt/shell/emulated 0700 shell shell
    mkdir /storage/emulated 0555 root root

    export EXTERNAL_STORAGE /storage/emulated/legacy
    export EMULATED_STORAGE_SOURCE /mnt/shell/emulated
    export EMULATED_STORAGE_TARGET /storage/emulated

on fs
    setprop ro.crypto.fuse_sdcard true

service sdcard /system/bin/sdcard -u 1023 -g 1023 -l /data/media /mnt/shell/emulated
    class late_start

storage_list.xml

<storage
    android:storageDescription="@string/storage_internal"
    android:emulated="true"
    android:mtpReserve="100" />

Examples of external storage configurations for typical devices as of Android 4.4. Only the relevant portions of the configuration files are included.

This is a typical configuration for a device with a single external storage device that is a physical SD card.
The raw physical device must first be mounted under /mnt/media_rw where only the system and FUSE daemon can access it. vold will then manage the fuse_sdcard0 service when media is inserted/removed.

fstab.hardware

[physical device node]  auto  vfat  defaults  voldmanaged=sdcard0:auto,noemulatedsd

init.hardware.rc

on init
    mkdir /mnt/media_rw/sdcard0 0700 media_rw media_rw
    mkdir /storage/sdcard0 0700 root root

    export EXTERNAL_STORAGE /storage/sdcard0

service fuse_sdcard0 /system/bin/sdcard -u 1023 -g 1023 -d /mnt/media_rw/sdcard0 /storage/sdcard0
    class late_start
    disabled

storage_list.xml

<storage
    android:mountPoint="/storage/sdcard0"
    android:storageDescription="@string/storage_sd_card"
    android:removable="true"
    android:primary="true"
    android:maxFileSize="4096" />

External storage is managed by a combination of the vold init service and the MountService system service. Mounting of physical external storage volumes is handled by vold, which performs staging operations to prepare the media before exposing it to apps.
For Android 4.2.2 and earlier, the device-specific vold.fstab configuration file defines mappings from sysfs devices to filesystem mount points, and each line follows this format:

dev_mount <label> <mount_point> <partition> <sysfs_path> [flags]

  • label: Label for the volume.
  • mount_point: Filesystem path where the volume should be mounted.
  • partition: Partition number (1 based), or 'auto' for first usable partition.
  • sysfs_path: One or more sysfs paths to devices that can provide this mount point. Separated by spaces, and each must start with /.
  • flags: Optional comma separated list of flags, must not contain /. Possible values include nonremovable and encryptable.
For Android releases 4.3 and later, the various fstab files used by init, vold and recovery were unified in the /fstab.<device> file. For external storage volumes that are managed by vold, the entries should have the following format:

<src> <mnt_point> <type> <mnt_flags> <fs_mgr_flags>
 
  • src: A path under sysfs (usually mounted at /sys) to the device that can provide the mount point. The path must start with /.
  • mount_point: Filesystem path where the volume should be mounted.
  • type: The type of the filesystem on the volume. For external cards, this is usually vfat.
  • mnt_flags: Vold ignores this field; it should be set to defaults.
  • fs_mgr_flags: Vold ignores any lines in the unified fstab that do not include the voldmanaged= flag in this field. This flag must be followed by a label describing the card, and a partition number or the word auto. Here is an example: voldmanaged=sdcard:auto. Other possible flags are nonremovable, encryptable=sdcard, and noemulatedsd.
External storage interactions at and above the framework level are handled through MountService. The device-specific storage_list.xml configuration file, typically provided through a frameworks/base overlay, defines the attributes and constraints of storage devices. The <StorageList> element contains one or more <storage> elements, exactly one of which should be marked primary. <storage> attributes include:
  • mountPoint: filesystem path of this mount.
  • storageDescription: string resource that describes this mount.
  • primary: true if this mount is the primary external storage.
  • removable: true if this mount has removable media, such as a physical SD card.
  • emulated: true if this mount is emulated and is backed by internal storage, possibly using a FUSE daemon.
  • mtp-reserve: number of MB of storage that MTP should reserve for free storage. Only used when mount is marked as emulated.
  • allowMassStorage: true if this mount can be shared via USB mass storage.
  • maxFileSize: maximum file size in MB.
Devices may provide external storage by emulating a case-insensitive, permissionless filesystem backed by internal storage. One possible implementation is provided by the FUSE daemon in system/core/sdcard, which can be added as a device-specific init.rc service:
 
# virtual sdcard daemon running as media_rw (1023)
service sdcard /system/bin/sdcard <source_path> <dest_path> 1023 1023
    class late_start

Where source_path is the backing internal storage and dest_path is the target mount point.
When configuring a device-specific init.rc script, the EXTERNAL_STORAGE environment variable must be defined as the path to the primary external storage. The /sdcard path must also resolve to the same location, possibly through a symlink. If a device adjusts the location of external storage between platform updates, symlinks should be created so that old paths continue working.


Starting in Android 4.2, devices can support multiple users, and external storage must meet the following constraints:
  • Each user must have their own isolated primary external storage, and must not have access to the primary external storage of other users.
  • The /sdcard path must resolve to the correct user-specific primary external storage based on the user a process is running as.
  • Storage for large OBB files in the Android/obb directory may be shared between multiple users as an optimization.
  • Secondary external storage must not be writable by apps, except in package-specific directories as allowed by synthesized permissions.
The default platform implementation of this feature leverages Linux kernel namespaces to create isolated mount tables for each Zygote-forked process, and then uses bind mounts to offer the correct user-specific primary external storage into that private namespace.

At boot, the system mounts a single emulated external storage FUSE daemon at EMULATED_STORAGE_SOURCE, which is hidden from apps. After the Zygote forks, it bind mounts the appropriate user-specific subdirectory from under the FUSE daemon to EMULATED_STORAGE_TARGET so that external storage paths resolve correctly for the app. Because apps lack accessible mount points for other users' storage, they can only access storage for the user they were started as.

This implementation also uses the shared subtree kernel feature to propagate mount events from the default root namespace into app namespaces, which ensures that features like ASEC containers and OBB mounting continue working correctly. It does this by mounting the rootfs as shared, and then remounting it as slave after each Zygote namespace is created.

Starting in Android 4.4, multiple external storage devices are surfaced to developers through Context.getExternalFilesDirs(), Context.getExternalCacheDirs(), and Context.getObbDirs().

External storage devices surfaced through these APIs must be a semi-permanent part of the device (such as an SD card slot in a battery compartment). Developers expect data stored in these locations to be available over long periods of time. For this reason, transient storage devices (such as USB mass storage drives) should not be surfaced through these APIs.

The WRITE_EXTERNAL_STORAGE permission must only grant write access to the primary external storage on a device. Apps must not be allowed to write to secondary external storage devices, except in their package-specific directories as allowed by synthesized permissions. Restricting writes in this way ensures the system can clean up files when applications are uninstalled.
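As a rough illustration of these APIs, here is a minimal Java sketch (the class name is illustrative; assumes API level 19 or higher) that enumerates the app's package-specific directories on every available external storage volume. The first entry returned by Context.getExternalFilesDirs() is on the primary volume; any additional entries are on secondary volumes such as a physical SD card, where writes outside these directories are not permitted.

import android.content.Context;
import android.util.Log;
import java.io.File;

public class StorageDirsExample {
    // Lists the app's package-specific directories on every external storage
    // volume. The first entry is on the primary volume; any additional entries
    // are on secondary volumes such as a physical SD card.
    public static void logExternalDirs(Context context) {
        File[] dirs = context.getExternalFilesDirs(null);
        for (int i = 0; i < dirs.length; i++) {
            if (dirs[i] == null) continue; // volume is currently unavailable
            String kind = (i == 0) ? "primary" : "secondary";
            Log.d("StorageDirs", kind + ": " + dirs[i].getAbsolutePath());
        }
    }
}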

Android supports devices with external storage, which is defined to be a case-insensitive filesystem with immutable POSIX permission classes and modes. External storage can be provided by physical media (such as an SD card), or by exposing a portion of internal storage through an emulation layer. Devices may contain multiple instances of external storage.

Access to external storage is protected by various Android permissions. Starting in Android 1.0, write access is protected with the WRITE_EXTERNAL_STORAGE permission. Starting in Android 4.1, read access is protected with the READ_EXTERNAL_STORAGE permission.

Starting in Android 4.4, the owner, group and modes of files on external storage devices are now synthesized based on directory structure. This enables apps to manage their package-specific directories on external storage without requiring they hold the broad WRITE_EXTERNAL_STORAGE permission. For example, the app with package name com.example.foo can now freely access Android/data/com.example.foo/ on external storage devices with no permissions. These synthesized permissions are accomplished by wrapping raw storage devices in a FUSE daemon.
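For example, a minimal sketch (illustrative class and file names; assumes Android 4.4 or later) of an app writing inside its own package-specific directory on the primary volume without holding WRITE_EXTERNAL_STORAGE:

import android.content.Context;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class PackageDirExample {
    // Writes a small file under Android/data/<package>/files on the primary
    // external storage volume. On Android 4.4 and later no
    // WRITE_EXTERNAL_STORAGE permission is needed, because the FUSE daemon
    // synthesizes owner access to the app's own directory.
    public static void writeNote(Context context) throws IOException {
        File dir = context.getExternalFilesDir(null);
        if (dir == null) return; // external storage not currently available
        File note = new File(dir, "note.txt"); // hypothetical file name
        FileOutputStream out = new FileOutputStream(note);
        try {
            out.write("hello external storage".getBytes());
        } finally {
            out.close();
        }
    }
}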

Since external storage offers minimal protection for stored data, system code should not store sensitive data on external storage. Specifically, configuration and log files should only be stored on internal storage where they can be effectively protected.


IDrmEngine

IDrmEngine is an interface with a set of APIs to suit DRM use cases. Plug-in developers must implement the interfaces specified in IDrmEngine and the listener interfaces specified below. This document assumes the plug-in developer has access to the Android source tree. The interface definition is available in the source tree at:
<platform_root>/frameworks/base/drm/libdrmframework/plugins/common/include

DRM Info

DrmInfo is a wrapper class that wraps the protocol for communicating with the DRM server. Server registration, deregistration, license acquisition, or any other server-related transaction can be achieved by processing an instance of DrmInfo. The protocol should be described by the plug-in in XML format. Each DRM plug-in would accomplish the transaction by interpreting the protocol. The DRM framework defines an API to retrieve an instance of DrmInfo called acquireDrmInfo().
DrmInfo* acquireDrmInfo(int uniqueId, const DrmInfoRequest* drmInfoRequest);
Retrieves the information necessary for registration, deregistration, or rights acquisition. See DrmInfoRequest for more information.
DrmInfoStatus* processDrmInfo(int uniqueId, const DrmInfo* drmInfo);
processDrmInfo() behaves asynchronously and the results of the transaction can be retrieved either from OnEventListener or OnErrorListener.
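At the application level, the same flow is exposed through android.drm.DrmManagerClient. A minimal sketch (illustrative class name; the MIME type is whatever the installed DRM scheme expects) that retrieves a DrmInfo for a rights-acquisition request and submits it for asynchronous processing:

import android.content.Context;
import android.drm.DrmInfo;
import android.drm.DrmInfoRequest;
import android.drm.DrmManagerClient;

public class DrmInfoExample {
    // Builds a rights-acquisition request, retrieves the plug-in's DrmInfo via
    // acquireDrmInfo(), and submits it with processDrmInfo(). Results arrive
    // asynchronously through the listeners described below.
    public static void acquireRightsInfo(Context context, String mimeType) {
        DrmManagerClient client = new DrmManagerClient(context);
        DrmInfoRequest request = new DrmInfoRequest(
                DrmInfoRequest.TYPE_RIGHTS_ACQUISITION_INFO, mimeType);
        DrmInfo info = client.acquireDrmInfo(request);
        if (info != null) {
            client.processDrmInfo(info);
        }
    }
}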

DRM rights

The association of DRM content and the license is required to allow playback of DRM content. Once the association has been made, the license will be handled in the DRM framework so the Media Player application is abstracted from the existence of license.
int checkRightsStatus(int uniqueId, const String8& path, int action);
Checks whether valid rights exist for the content at the given path for the requested action (for example, play).
status_t saveRights(int uniqueId, const DrmRights& drmRights, const String8& rightsPath, const String8& contentPath);
Saves DRM rights to the specified rights path and associates them with the content path. The input parameters are the DrmRights to be saved, the rights file path where the rights are to be saved, and the content file path of the content.
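From an application, the corresponding DrmManagerClient calls look roughly like the following sketch (illustrative class name; the rights path, content path, and MIME type are placeholders):

import android.content.Context;
import android.drm.DrmManagerClient;
import android.drm.DrmRights;
import android.drm.DrmStore;
import java.io.IOException;

public class DrmRightsExample {
    // Saves a downloaded rights object, associates it with the content file,
    // and then checks whether playback is currently allowed.
    public static boolean installAndCheck(Context context, String rightsPath,
            String contentPath, String mimeType) throws IOException {
        DrmManagerClient client = new DrmManagerClient(context);
        DrmRights rights = new DrmRights(rightsPath, mimeType);
        client.saveRights(rights, rightsPath, contentPath);
        int status = client.checkRightsStatus(contentPath, DrmStore.Action.PLAY);
        return status == DrmStore.RightsStatus.RIGHTS_VALID;
    }
}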

License Metadata

License metadata, such as the license expiry time and repeat count, may be embedded inside the rights of the protected content. The Android DRM framework provides APIs to return the constraints associated with input content. See DrmManagerClient for more information.
DrmConstraints* getConstraints(int uniqueId, const String path, int action);
The getConstraints function call returns key-value pairs of the constraints embedded in the protected content. To retrieve the constraints, the unique identifier for the session and the path of the protected content are required, along with the action, defined as Action::DEFAULT, Action::PLAY, and so on.

DrmMetadata* getMetadata(int uniqueId, const String path);
Returns key-value pairs of metadata associated with the protected content at the given path.
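A minimal application-level sketch of these two calls through android.drm.DrmManagerClient (illustrative class name; the content path is a placeholder):

import android.content.ContentValues;
import android.content.Context;
import android.drm.DrmManagerClient;
import android.drm.DrmStore;
import android.util.Log;

public class LicenseInfoExample {
    // Reads the constraints (for example, the license expiry time) and the
    // metadata key-value pairs for a piece of protected content.
    public static void dump(Context context, String contentPath) {
        DrmManagerClient client = new DrmManagerClient(context);
        ContentValues constraints =
                client.getConstraints(contentPath, DrmStore.Action.PLAY);
        if (constraints != null) {
            Log.d("LicenseInfo", "expires: " + constraints.get(
                    DrmStore.ConstraintsColumns.LICENSE_EXPIRY_TIME));
        }
        ContentValues metadata = client.getMetadata(contentPath);
        if (metadata != null) {
            Log.d("LicenseInfo", "metadata: " + metadata);
        }
    }
}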

Decrypt session

To maintain the decryption session, the caller of the DRM framework has to invoke openDecryptSession() at the beginning of the decryption sequence. openDecryptSession() asks each DRM plug-in if it can handle input DRM content.
status_t openDecryptSession( int uniqueId, DecryptHandle* decryptHandle, int fd, off64_t offset, off64_t length);
The above call opens a decrypt session for the protected content referenced by the file descriptor, offset, and length. The plug-in that accepts the content fills in the DecryptHandle, which the caller then uses for subsequent decryption operations.

DRM plug-in Listeners

Some APIs in the DRM framework behave asynchronously in a DRM transaction. An application can register three listener classes with the DRM framework:
  • OnEventListener for results of asynchronous APIs.
  • OnErrorListener for receiving errors of asynchronous APIs.
  • OnInfoListener for any supplementary information during DRM transactions.
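A brief sketch (illustrative class name and log tags) of registering the three listeners on a DrmManagerClient:

import android.content.Context;
import android.drm.DrmErrorEvent;
import android.drm.DrmEvent;
import android.drm.DrmInfoEvent;
import android.drm.DrmManagerClient;
import android.util.Log;

public class DrmListenersExample {
    // Registers the three listener types so that the results of asynchronous
    // calls such as processDrmInfo() are delivered back to the application.
    public static void register(Context context) {
        DrmManagerClient client = new DrmManagerClient(context);
        client.setOnEventListener(new DrmManagerClient.OnEventListener() {
            public void onEvent(DrmManagerClient c, DrmEvent e) {
                Log.d("Drm", "event: " + e.getType());
            }
        });
        client.setOnErrorListener(new DrmManagerClient.OnErrorListener() {
            public void onError(DrmManagerClient c, DrmErrorEvent e) {
                Log.e("Drm", "error: " + e.getType());
            }
        });
        client.setOnInfoListener(new DrmManagerClient.OnInfoListener() {
            public void onInfo(DrmManagerClient c, DrmInfoEvent e) {
                Log.d("Drm", "info: " + e.getType());
            }
        });
    }
}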

Source

The Android DRM framework includes a passthru plug-in as a sample plug-in. The implementation of the passthru plug-in can be found in the Android source tree at:
<platform_root>/frameworks/base/drm/libdrmframework/plugins/passthru

Build and Integration

Add the following to the Android.mk of the plug-in implementation. The passthru plug-in is used as a sample.
PRODUCT_COPY_FILES += $(TARGET_OUT_SHARED_LIBRARIES)/<plugin_library>:system/lib/drm/plugins/native/<plugin_library>
For example:
PRODUCT_COPY_FILES += $(TARGET_OUT_SHARED_LIBRARIES)/libdrmpassthruplugin.so:system/lib/drm/plugins/native/libdrmpassthruplugin.so
Plug-in developers must locate their respective plug-ins under this directory like so:
/system/lib/drm/plugins/native/libdrmpassthruplugin.so





As shown in the figure, the DRM framework uses a plug-in architecture to support various DRM schemes. The DRM manager service runs in an independent process to ensure isolated execution of DRM plug-ins. Each API call from DrmManagerClient to DrmManagerService goes across process boundaries by using the binder IPC mechanism. The DrmManagerClient provides a Java programming language implementation as a common interface to runtime applications; it also provides a DrmManagerClient-native implementation as the interface to native modules. The caller of DRM framework accesses only the DrmManagerClient and does not have to be aware of each DRM scheme. 

Plug-ins are loaded automatically when DrmManagerService is launched. As shown in the figure below, the DRM plug-in manager loads/unloads all the available plug-ins. The DRM framework loads plug-ins automatically by finding them under:
/system/lib/drm/plugins/native/



The plug-in developer should ensure the plug-in is located in the DRM framework plug-in discovery directory. See implementation instructions for details.


Availability of rich digital content is important to users on mobile devices. To make their content widely available, Android developers and digital content publishers need a consistent DRM implementation supported across the Android ecosystem. In order to make that digital content available on Android devices and to ensure that there is at least one consistent DRM available across all devices, Google provides DRM without any license fees on compatible Android devices. On Android 3.0 and higher platforms, the DRM plug-in is integrated with the Android DRM framework and can use hardware-backed protection to secure premium content and user credentials. 

The content protection provided by the DRM plug-in depends on the security and content protection capabilities of the underlying hardware platform. The hardware capabilities of the device include hardware secure boot to establish a chain of trust and protection of cryptographic keys. Content protection capabilities of the device include protection of decrypted frames in the device and content protection via a trusted output protection mechanism. Not all hardware platforms support all of the above security and content protection features. Security is never implemented in a single place in the stack, but instead relies on the integration of hardware, software, and services. The combination of hardware security functions, a trusted boot mechanism, and an isolated secure OS for handling security functions is critical to providing a secure device.

The Android platform provides an extensible DRM framework that lets applications manage rights-protected content according to the license constraints associated with the content. The DRM framework supports many DRM schemes; which DRM schemes a device supports is up to the device manufacturer. The DRM framework introduced in Android 3.0 provides a unified interface for application developers and hides the complexity of DRM operations. The DRM framework provides a consistent operation mode for protected and non-protected content. DRM schemes can define very complex usage models by license metadata. The DRM framework provides the association between DRM content and license, and handles the rights management. This enables the media player to be abstracted from DRM-protected or non-protected content. See MediaDrm for the class to obtain keys for decrypting protected media streams.





The DRM framework is designed to be implementation agnostic and abstracts the details of the specific DRM scheme implementation in a scheme-specific DRM plug-in. The DRM framework includes simple APIs to handle complex DRM operations, register users and devices to online DRM services, extract constraint information from the license, associate DRM content and its license, and finally decrypt DRM content.
The Android DRM framework is implemented in two architectural layers:
  • A DRM framework API, which is exposed to applications through the Android application framework and runs through the Dalvik VM for standard applications.
  • A native code DRM manager, which implements the DRM framework and exposes an interface for DRM plug-ins (agents) to handle rights management and decryption for various DRM schemes.
See the Android DRM package reference for additional details.


Android's camera HAL connects the higher level camera framework APIs in android.hardware to your underlying camera driver and hardware. The figure and list describe the components involved and where to find the source for each: 

Application framework
At the application framework level is the app's code, which utilizes the android.hardware.Camera API to interact with the camera hardware. Internally, this code calls a corresponding JNI glue class to access the native code that interacts with the camera.
JNI
The JNI code associated with android.hardware.Camera is located in frameworks/base/core/jni/android_hardware_Camera.cpp. This code calls the lower level native code to obtain access to the physical camera and returns data that is used to create the android.hardware.Camera object at the framework level.
Native framework
The native framework defined in frameworks/av/camera/Camera.cpp provides a native equivalent to the android.hardware.Camera class. This class calls the IPC binder proxies to obtain access to the camera service.
Binder IPC proxies
The IPC binder proxies facilitate communication over process boundaries. There are three camera binder classes, located in the frameworks/av/camera directory, that call into the camera service: ICameraService is the interface to the camera service, ICamera is the interface to a specific opened camera device, and ICameraClient is the device's interface back to the application framework.
Camera service
The camera service, located in frameworks/av/services/camera/libcameraservice/CameraService.cpp, is the actual code that interacts with the HAL.
HAL
The hardware abstraction layer defines the standard interface that the camera service calls into and that you must implement to have your camera hardware function correctly.
Kernel driver
The camera's driver interacts with the actual camera hardware and your implementation of the HAL. The camera and driver must support YV12 and NV21 image formats to provide support for previewing the camera image on the display and video recording.

Implementing the HAL


The HAL sits between the camera driver and the higher level Android framework and defines an interface that you must implement so that apps can correctly operate the camera hardware. The HAL interface is defined in the hardware/libhardware/include/hardware/camera.h and hardware/libhardware/include/hardware/camera_common.h header files.
camera_common.h defines an important struct, camera_module, which defines a standard structure to obtain general information about the camera, such as its ID and properties that are common to all cameras such as whether or not it is a front or back-facing camera.
camera.h contains the code that corresponds mainly with android.hardware.Camera. This header file declares a camera_device struct that contains a camera_device_ops struct with function pointers that point to functions that implement the HAL interface. For documentation on the different types of camera parameters that a developer can set, see the frameworks/av/include/camera/CameraParameters.h file. These parameters are set with the function pointed to by int (*set_parameters)(struct camera_device *, const char *parms) in the HAL.
For an example of a HAL implementation, see the implementation for the Galaxy Nexus HAL in hardware/ti/omap4xxx/camera.
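To make the path from the framework to the HAL concrete, here is a minimal application-level sketch (illustrative class name) using the android.hardware.Camera API of this era; the call to Camera.setParameters() travels through the JNI glue and the camera service down to the HAL's set_parameters() entry point:

import android.hardware.Camera;

public class CameraParamsExample {
    // Opens the default (back-facing) camera and applies a parameter change.
    // Camera.setParameters() travels through the JNI glue and the camera
    // service down to the HAL's set_parameters() entry point.
    public static Camera openWithAutofocus() {
        Camera camera = Camera.open(); // null if the camera is unavailable
        if (camera == null) return null;
        Camera.Parameters params = camera.getParameters();
        if (params.getSupportedFocusModes()
                .contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
            camera.setParameters(params);
        }
        return camera;
    }
}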

Configuring the Shared Library


You need to set up the Android build system to correctly package the HAL implementation into a shared library and copy it to the appropriate location by creating an Android.mk file:
  1. Create a device/<company_name>/<device_name>/camera directory to contain your library's source files.
  2. Create an Android.mk file to build the shared library. Ensure that the Makefile contains the following lines: 
    LOCAL_MODULE := camera.<device_name>
    LOCAL_MODULE_RELATIVE_PATH := hw
    
    Notice that your library must be named camera.<device_name> (.so is appended automatically), so that Android can correctly load the library. For an example, see the Makefile for the Galaxy Nexus camera located in hardware/ti/omap4xxx/Android.mk.
  3. Specify that your device has camera features by copying the necessary feature XML files in the frameworks/native/data/etc directory with your device's Makefile. For example, to specify that your device has a camera flash and can autofocus, add the following lines in your device's <device>/<company_name>/<device_name>/device.mk Makefile:
    PRODUCT_COPY_FILES := \ ...
    
    PRODUCT_COPY_FILES += \
    frameworks/native/data/etc/android.hardware.camera.flash-autofocus.xml:system/etc/permissions/android.hardware.camera.flash-autofocus.xml \    
    
    For an example of a device Makefile, see device/samsung/tuna/device.mk.
  4. Declare your camera’s media codec, format, and resolution capabilities in the device/<company_name>/<device_name>/media_profiles.xml and device/<company_name>/<device_name>/media_codecs.xml XML files. For more information, see Exposing Codecs and Profiles to the Framework.
  5. Add the following lines in your device's device/<company_name>/<device_name>/device.mk Makefile to copy the media_profiles.xml and media_codecs.xml files to the appropriate location:
    # media config xml file
    PRODUCT_COPY_FILES += \
        <device>/<company_name>/<device_name>/media_profiles.xml:system/etc/media_profiles.xml
    # media codec config xml file
    PRODUCT_COPY_FILES += \
        <device>/<company_name>/<device_name>/media_codecs.xml:system/etc/media_codecs.xml
    
  6. Declare that you want to include the Camera app in your device's system image by specifying it in the PRODUCT_PACKAGES variable in your device's device/<company_name>/<device_name>/device.mk Makefile:
    PRODUCT_PACKAGES := \
    Gallery2 \
    ...
    



Android provides a default Bluetooth stack, BlueDroid, that is divided into two layers: the Bluetooth Embedded System (BTE), which implements the core Bluetooth functionality, and the Bluetooth Application Layer (BTA), which communicates with Android framework applications. A Bluetooth system service communicates with the Bluetooth stack through JNI and with applications through Binder IPC. The system service provides developers access to various Bluetooth profiles. The diagram shows the general structure of the Bluetooth stack:

Application framework
At the application framework level is the app's code, which utilizes the android.bluetooth APIs to interact with the Bluetooth hardware (a usage sketch follows the stack description below). Internally, this code calls the Bluetooth process through the Binder IPC mechanism.
Bluetooth system service
The Bluetooth system service, located in packages/apps/Bluetooth, is packaged as an Android app and implements the Bluetooth service and profiles at the Android framework layer. This app calls into the HAL layer via JNI.
JNI
The JNI code associated with android.bluetooth is located in packages/apps/Bluetooth/jni. The JNI code calls into the HAL layer and receives callbacks from the HAL when certain Bluetooth operations occur, such as when devices are discovered.
HAL
The hardware abstraction layer defines the standard interface that the android.bluetooth APIs and Bluetooth process call into and that you must implement to have your Bluetooth hardware function correctly. The header files for the Bluetooth HAL are located in the hardware/libhardware/include/hardware/bluetooth.h and hardware/libhardware/include/hardware/bt_*.h files.
Bluetooth stack
The default Bluetooth stack is provided for you and is located in external/bluetooth/bluedroid. The stack implements the generic Bluetooth HAL as well as customizes it with extensions and configuration changes.
Vendor extensions
To add custom extensions and an HCI layer for tracing, you can create a libbt-vendor module and specify these components.
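As referenced in the application framework description above, here is a minimal sketch (illustrative class name; requires the BLUETOOTH permission, plus BLUETOOTH_ADMIN for discovery) of the android.bluetooth API at the top of this stack:

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.util.Log;
import java.util.Set;

public class BluetoothExample {
    // Every call here goes over Binder to the Bluetooth system service and,
    // through JNI and the HAL, into the BlueDroid stack. Requires the
    // BLUETOOTH permission (and BLUETOOTH_ADMIN to start discovery).
    public static void listBondedDevices() {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter == null || !adapter.isEnabled()) {
            return; // no Bluetooth hardware, or the radio is off
        }
        Set<BluetoothDevice> bonded = adapter.getBondedDevices();
        for (BluetoothDevice device : bonded) {
            Log.d("Bluetooth", device.getName() + " " + device.getAddress());
        }
        adapter.startDiscovery(); // results arrive via ACTION_FOUND broadcasts
    }
}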

Implementing the HAL


The Bluetooth HAL is located in the hardware/libhardware/include/hardware/ directory and consists of the following header files:
  • bluetooth.h: Contains the HAL for the Bluetooth hardware on the device.
  • bt_av.h: Contains the HAL for the advanced audio profile.
  • bt_hf.h: Contains the HAL for the handsfree profile.
  • bt_hh.h: Contains the HAL for the HID host profile.
  • bt_hl.h: Contains the HAL for the health profile.
  • bt_pan.h: Contains the HAL for the PAN profile.
  • bt_sock.h: Contains the HAL for the socket profile.
Keep in mind that your Bluetooth implementation is not constrained to the features and profiles exposed in the HAL. You can find the default implementation located in the BlueDroid Bluetooth stack in the external/bluetooth/bluedroid directory, which implements the default HAL and also extra features and customizations.

Customizing the BlueDroid Stack


If you are using the default BlueDroid stack, but want to make a few customizations, you can do the following things:
  • Custom Bluetooth profiles - If you want to add Bluetooth profiles that do not have HAL interfaces provided by Android, you must supply an SDK add-on download to make the profile available to app developers, make the APIs available in the Bluetooth system process app (packages/apps/Bluetooth), and add them to the BlueDroid stack (external/bluetooth/bluedroid).
  • Custom vendor extensions and configuration changes - You can add things such as extra AT commands or device-specific configuration changes by creating a libbt-vendor module. See the vendor/broadcom/libbt-vendor directory for an example.
  • Host Controller Interface (HCI) - You can provide your own HCI by creating a libbt-hci module, which is mainly used for debug tracing. See the external/bluetooth/hci directory for an example.



Application framework
At the application framework level is the app code, which utilizes the android.media APIs to interact with the audio hardware. Internally, this code calls corresponding JNI glue classes to access the native code that interacts with the audio hardware.
JNI
The JNI code associated with android.media is located in the frameworks/base/core/jni/ and frameworks/base/media/jni directories. This code calls the lower level native code to obtain access to the audio hardware.
Native framework
The native framework is defined in frameworks/av/media/libmedia and provides a native equivalent to the android.media package. The native framework calls the Binder IPC proxies to obtain access to audio-specific services of the media server.
Binder IPC
The Binder IPC proxies facilitate communication over process boundaries. They are located in the frameworks/av/media/libmedia directory and begin with the letter "I".
Media Server
The audio services in the media server, located in frameworks/av/services/audioflinger, are the actual code that interacts with your HAL implementations.
HAL
The HAL defines the standard interface that audio services call into and that you must implement to have your audio hardware function correctly. The audio HAL interfaces are located in hardware/libhardware/include/hardware. See <hardware/audio.h> for additional details.
Kernel Driver
The audio driver interacts with the hardware and your implementation of the HAL. You can choose to use ALSA, OSS, or a custom driver of your own at this level. The HAL is driver-agnostic. Note: If you do choose ALSA, we recommend using external/tinyalsa for the user portion of the driver because of its compatible licensing (The standard user-mode library is GPL licensed).
Not shown: Android native audio based on OpenSL ES. This API is exposed as part of the Android NDK and is at the same architecture level as android.media.
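To tie the layers together, here is a minimal sketch (illustrative class name) that plays a short tone through the android.media APIs; the PCM data written to AudioTrack is handed to AudioFlinger in the media server, which drives the audio HAL described above:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class ToneExample {
    // Plays one second of a 440 Hz sine tone. The PCM buffer written to
    // AudioTrack is handed over Binder to AudioFlinger in the media server,
    // which in turn drives the audio HAL.
    public static AudioTrack playTone() {
        int sampleRate = 44100;
        short[] samples = new short[sampleRate];
        for (int i = 0; i < samples.length; i++) {
            samples[i] = (short) (Math.sin(2 * Math.PI * 440 * i / sampleRate)
                    * Short.MAX_VALUE * 0.5);
        }
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                samples.length * 2, AudioTrack.MODE_STATIC);
        track.write(samples, 0, samples.length); // copied before playback
        track.play();
        return track; // caller should release() the track when done
    }
}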


As an Android user you already know the fundamental functions, such as making a call, sending a text message, changing the system settings, installing and uninstalling apps, and so on. Knowing how to use Android, however, is not enough for an Android application developer. So what else does a developer need to know about Android? To be an Android apps developer you should know the key concepts of the platform, all the nuts and bolts of the Android operating system.

The illustration shows the Android system architecture. The Android OS can be described as a software stack of different layers, where each layer is a group of program components. Together they include the operating system, middleware, and important applications. Every layer in the architecture provides different types of services. The layers are:

Linux Kernel

For the most part, developing your device drivers is the same as developing a typical Linux device driver. Android uses a specialized version of the Linux kernel with a few special additions such as wakelocks, a memory management system that is more aggressive in preserving memory, the Binder IPC driver, and other features that are important for a mobile embedded platform like Android. These additions have less to do with driver development than with the system's functionality. You can use any version of the kernel that you want as long as it supports the required features, such as the binder driver. However, we recommend using the latest version of the Android kernel. For the latest Android kernel, see Building Kernels.

Libraries

The next layer is Android's native libraries. This layer enables the device to handle different types of data. These libraries are written in C or C++ and are often specific to a particular hardware platform.

Some of the important native libraries include the following:

Surface Manager: Used for compositing windows with off-screen buffering. Off-screen buffering means you cannot draw directly to the screen; your drawings go to an off-screen buffer, where they are combined with other drawings to form the final screen the user will see. This off-screen buffer is what makes window transparency possible.

Media framework: Provides the media codecs that allow the recording and playback of different media formats.

SQLite: The database engine used in Android for data storage purposes (a usage sketch follows this list).

WebKit: The browser engine used to display HTML content.

OpenGL: Used to render 2D or 3D graphics content to the screen.
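Applications reach the SQLite engine through the framework's android.database.sqlite classes. A minimal sketch (illustrative class, database, and table names):

import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.util.Log;

public class SqliteExample {
    // Creates a table, inserts a row, and reads it back through the
    // framework's SQLite API.
    public static void demo(Context context) {
        SQLiteDatabase db = context.openOrCreateDatabase(
                "demo.db", Context.MODE_PRIVATE, null);
        db.execSQL("CREATE TABLE IF NOT EXISTS notes "
                + "(_id INTEGER PRIMARY KEY, body TEXT)");
        db.execSQL("INSERT INTO notes (body) VALUES (?)", new Object[] {"hello"});
        Cursor cursor = db.rawQuery("SELECT body FROM notes", null);
        while (cursor.moveToNext()) {
            Log.d("Sqlite", cursor.getString(0));
        }
        cursor.close();
        db.close();
    }
}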

Android Runtime

Android Runtime consists of Dalvik Virtual machine and Core Java libraries.

Dalvik Virtual Machine
It is a virtual machine, used in Android devices to run apps, that is optimized for low processing power and low memory environments. Unlike the standard JVM, the Dalvik Virtual Machine does not run .class files; instead it runs .dex files, which are built from .class files at compile time and provide higher efficiency in low resource environments. The Dalvik VM allows multiple instances of the virtual machine to be created simultaneously, providing security, isolation, memory management, and threading support. It was developed by Dan Bornstein of Google.

Core Java Libraries
These are different from the Java SE and Java ME libraries. However, these libraries provide most of the functionality defined in the Java SE libraries.

Application Framework

These are the blocks that our applications directly interact with. These programs manage the basic functions of the phone, such as resource management and voice call management. As an Android apps developer, you can think of them as basic tools with which to build your applications.

Important blocks of Application framework are:

Activity Manager: Manages the activity lifecycle of applications.

Content Providers: Manage data sharing between applications.

Telephony Manager: Manages all voice calls; use the telephony manager if you want to access voice calls in your application (see the sketch after this list).

Location Manager: Manages location, using GPS or cell towers.

Resource Manager: Manages the various types of resources used in applications.
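These framework blocks are obtained through Context.getSystemService(). A minimal sketch (illustrative class name) that retrieves the telephony and location managers:

import android.content.Context;
import android.location.LocationManager;
import android.telephony.TelephonyManager;
import android.util.Log;

public class FrameworkServicesExample {
    // Obtains two application framework blocks through getSystemService().
    public static void useServices(Context context) {
        TelephonyManager telephony = (TelephonyManager)
                context.getSystemService(Context.TELEPHONY_SERVICE);
        Log.d("Services", "operator: " + telephony.getNetworkOperatorName());

        LocationManager location = (LocationManager)
                context.getSystemService(Context.LOCATION_SERVICE);
        Log.d("Services", "GPS enabled: "
                + location.isProviderEnabled(LocationManager.GPS_PROVIDER));
    }
}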

Applications

Applications are the top layer in the Android architecture, and this is where our applications fit. Several standard applications come pre-installed with every device, such as:
  • SMS client app
  • Dialer
  • Web browser
  • Contact manager
As an Android application developer you can write an app that replaces any existing system app; you are not locked out of any particular feature. Android places very few limits on what an application can do, and it opens endless opportunities to the application developer.

To test your Android applications you will need a virtual Android device, so before we start writing code, let us create an Android Virtual Device (AVD). Launch the Android AVD Manager from the Eclipse menu: Window > AVD Manager.

This step sets up the Android Development Tools (ADT) plugin for Eclipse. Launch Eclipse and choose Help > Software Updates > Install New Software.

To install the Eclipse IDE (Integrated Development Environment), download the latest Eclipse binaries from http://www.eclipse.org/downloads/

Download the latest version of the Android SDK from the official Android website: Android Software Development Kit (SDK) Downloads.

http://developer.android.com/sdk/index.html

JDK setup step 01:

Download the JDK from:
http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html

JDK setup step 02:

On the download page, select the JDK package you need for your platform and download it.

JDK setup step 03:

After the download completes, browse to the installer file (for example, jdk-7u25-windows-i586) and run it.

JDK setup steps 04-08:

The JDK installation wizard runs; follow its screens to completion.

Setup has completed, so let's enjoy.


  1. JDK Alpha and Beta (1995): Sun announced Java on September 23, 1995.
  2. JDK 1.0 (January 23, 1996): Originally called Oak (named after the oak tree outside James Gosling's office). Renamed to Java 1 in JDK 1.0.2.
  3. JDK 1.1 (February 19, 1997): Introduced AWT event model, inner class, JavaBean, JDBC, and RMI.
  4. J2SE 1.2 (codename Playground) (December 8, 1998): Re-branded as "Java 2" and renamed JDK to J2SE (Java 2 Standard Edition). Also released J2EE (Java 2 Enterprise Edition) and J2ME (Java 2 Micro Edition). Included JFC (Java Foundation Classes - Swing, Accessibility API, Java 2D, Pluggable Look and Feel and Drag and Drop). Introduced Collection Framework and JIT compiler.
  5. J2SE 1.3 (codename Kestrel) (May 8, 2000): Introduced Hotspot JVM.
  6. J2SE 1.4 (codename Merlin) (February 6, 2002): Introduced assert, non-blocking IO (nio), logging API, image IO, Java webstart, regular expression support.
  7. J2SE 5.0 (codename Tiger) (September 30, 2004): Officially called 5.0 instead of 1.5. Introduced generics, autoboxing/unboxing, annotation, enum, varargs, for-each loop, static import.
  8. Java SE 6 (codename Mustang) (December 11, 2006): Renamed J2SE to Java SE (Java Standard Edition).
  9. Java SE 7 (codename Dolphin) (July 28, 2011): First version after Oracle purchased Sun (called Oracle JDK).
  10. Java SE 8 (March 18, 2014): Included support for lambda expressions, default methods, and a JavaScript runtime.

JRE

The JRE (Java Runtime Environment) is needed for running Java programs.

JDK

The JDK (Java Development Kit) includes the JRE plus the development tools (such as the compiler and debugger) and is needed for writing as well as running Java programs. Since you are going to write Java programs, you should install the JDK, which includes the JRE.


Start your Android application development on either of the following operating systems:

  • Microsoft Windows XP or later version.
  • Mac OS X 10.5.8 or later version with Intel chip.
  • Linux including GNU C Library 2.7 or later.


The tools required to develop Android applications are completely free. Just download the following software from the internet and set it up before you start your Android apps development:

  • JDK (Java Development Kit)
  • Android SDK (Android Software Development Kit)
  • Eclipse IDE for Java Developers
  • ADT (Android Development Tools)
The last two components are optional. If you are working on the Windows operating system, these components make your life easier while doing Java-based application development. So let's start setting up the Android apps development environment.

Download Android Studio and SDK Tools

Download JDK

Applications ("apps"), that extend the functionality of devices, are written primarily in the Java programming language (without the usual "write once, run anywhere" claim of the Java platform) using the Android software development kit (SDK). Once developed, Android applications can be packaged easily and sold out either through a store such as Google Play or the Amazon Appstore. Android powers hundreds of millions of mobile devices in more than 190 countries around the world. It's the largest installed base of any mobile platform and growing fast. Every day more than 1 million new Android devices are activated worldwide.

Feature: Description

Beautiful UI: The Android OS basic screen provides a beautiful and intuitive user interface.
Connectivity: GSM/EDGE, IDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE, NFC, and WiMAX.
Storage: SQLite, a lightweight relational database, is used for data storage purposes.
Media support: H.263, H.264, MPEG-4 SP, AMR, AMR-WB, AAC, HE-AAC, AAC 5.1, MP3, MIDI, Ogg Vorbis, WAV, JPEG, PNG, GIF, and BMP.
Messaging: SMS and MMS.
Web browser: Based on the open-source WebKit layout engine, coupled with Chrome's V8 JavaScript engine, supporting HTML5 and CSS3.
Multi-touch: Android has native support for multi-touch, which was initially made available in handsets such as the HTC Hero.
Multi-tasking: Users can jump from one task to another, and multiple applications can run simultaneously.
Resizable widgets: Widgets are resizable, so users can expand them to show more content or shrink them to save space.
Multi-Language: Supports single-direction and bi-directional text.
GCM: Google Cloud Messaging (GCM) is a service that lets developers send short message data to their users on Android devices, without needing a proprietary sync solution.
Wi-Fi Direct: A technology that lets apps discover and pair directly, over a high-bandwidth peer-to-peer connection.
Android Beam: A popular NFC-based technology that lets users instantly share content just by touching two NFC-enabled phones together.



Restricted profiles for tablets

You can now limit access to apps and content at home and work. For parents, this means you can create parental controls; for retailers, it means you can turn a tablet into a kiosk.

Bluetooth Smart support

Bluetooth Smart minimizes power use while measuring and transmitting data for fitness sensors like Fitbit, Runtastic and other devices, making your phone or tablet more power efficient. 

Dial pad autocomplete

Just start touching numbers or letters and the dial pad will automatically suggest numbers or names.

Improved support for Hebrew and Arabic

We’ve added more support for Hebrew and Arabic speakers with new builds for right-to-left layouts.

Even more features

For those of you looking to go deeper, here's an exhaustive list of all the updates found in Android 4.3, categorized for quick access to the changes.

Audio
Virtual surround sound - enjoy movies from Google Play with surround sound on Nexus 7 (2013 edition) and other Nexus devices.
Graphics
OpenGL ES 3.0 - Android now supports the latest version of the industry standard for high performance graphics.
Wireless Display for Nexus 7 (2013 edition) and Nexus 10 - project from your tablet to a TV.
Input
Easier text input - an improved algorithm for tap-typing recognition makes text input easier.
Lower latency input for gamepad buttons and joysticks.
Location
Location detection through Wi-Fi - use Wi-Fi to detect location without turning on Wi-Fi all the time.
Networking
Bluetooth Smart support (a.k.a. Bluetooth Low-Energy) - devices like Nexus 4 and Nexus 7 (2013 edition) are now Bluetooth Smart Ready.
Bluetooth AVRCP 1.3 support - display song names on a car stereo.
Settings
Disabled apps tab - check which apps are disabled in Settings > Apps.
System
Restricted profiles - put your tablet into a mode with limited access to apps and content.
Setup wizard simplification - getting started on Android is easier thanks to the ability to correct previous input, and because of streamlined user agreements.
Faster user switching - switching users from the lock screen is now faster.
Enhanced photo daydream - navigate through interesting albums.



