Audio

Introduction

The audio_standard repository implements audio-related features, including audio playback, recording, volume management, and device management.

Figure 1 Position in the subsystem architecture

Basic Concepts

  • Sampling

Sampling is the process of obtaining discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval.

  • Sampling rate

The sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. The human hearing range is generally 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, and 96 kHz.

  • Channel

Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.

  • Audio frame

Audio data is streamed. For ease of processing and transmission by audio algorithms, a data block spanning 2.5 to 60 milliseconds is conventionally treated as one audio frame. This duration is called the sampling time, and its length is specific to the codec and the application requirements.

  • PCM

Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.
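Taken together, these quantities determine how much data one audio frame occupies. As a rough stand-alone illustration (plain C++, independent of this repository's APIs; the helper name is hypothetical), the byte size of a PCM frame follows directly from the sampling rate, channel count, sample width, and frame duration:

```cpp
#include <cassert>
#include <cstdint>

// Bytes occupied by one PCM audio frame:
// samples per channel in the frame * channel count * bytes per sample.
constexpr int64_t FrameSizeBytes(int64_t sampleRateHz, int64_t channels,
                                 int64_t bytesPerSample, int64_t frameMs)
{
    return sampleRateHz * frameMs / 1000 * channels * bytesPerSample;
}

// A 20 ms frame of 44.1 kHz stereo 16-bit PCM:
// 882 samples per channel * 2 channels * 2 bytes = 3528 bytes.
static_assert(FrameSizeBytes(44100, 2, 2, 20) == 3528, "frame size");
```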

Directory Structure

The structure of the repository directory is as follows:

/foundation/multimedia/audio_framework  # Audio code
├── frameworks                         # Framework code
│   ├── native                         # Internal native API implementation;
│   │                                  # PulseAudio, libsndfile build configuration and pulseaudio-hdi modules
│   └── js                             # External JS API implementation
│       └── napi                       # JS NAPI API implementation
├── interfaces                         # Interfaces
│   ├── inner_api                      # Internal Native APIs
│   └── kits                           # External JS APIs
├── sa_profile                         # Service configuration profile
├── services                           # Service code
├── LICENSE                            # License file
└── ohos.build                         # Build file

Usage Guidelines

Audio Playback

You can use APIs provided in this repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following steps describe how to use AudioRenderer to develop the audio playback function:

  1. Use the Create API with the required renderer configuration to get an AudioRenderer instance.

    AudioRendererOptions rendererOptions;
    rendererOptions.streamInfo.samplingRate = AudioSamplingRate::SAMPLE_RATE_44100;
    rendererOptions.streamInfo.encoding = AudioEncodingType::ENCODING_PCM;
    rendererOptions.streamInfo.format = AudioSampleFormat::SAMPLE_S16LE;
    rendererOptions.streamInfo.channels = AudioChannel::STEREO;
    rendererOptions.rendererInfo.contentType = ContentType::CONTENT_TYPE_MUSIC;
    rendererOptions.rendererInfo.streamUsage = StreamUsage::STREAM_USAGE_MEDIA;
    rendererOptions.rendererInfo.rendererFlags = 0;
    
    unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(rendererOptions);
  2. (Optional) The static APIs GetSupportedFormats(), GetSupportedChannels(), GetSupportedEncodingTypes(), and GetSupportedSamplingRates() can be used to get the supported values of the parameters.

  3. (Optional) Use audioRenderer->GetRendererInfo(AudioRendererInfo &) and audioRenderer->GetStreamInfo(AudioStreamInfo &) to retrieve the current renderer configuration values.

  4. To listen for audio interrupt and state change events, register a callback using audioRenderer->SetRendererCallback.

    class AudioRendererCallbackImpl : public AudioRendererCallback {
        void OnInterrupt(const InterruptEvent &interruptEvent) override
        {
            if (interruptEvent.forceType == INTERRUPT_FORCE) { // Forced actions taken by the framework
                switch (interruptEvent.hintType) {
                    case INTERRUPT_HINT_PAUSE:
                        // Force paused. Pause writing.
                        isRenderPaused_ = true;
                        break;
                    case INTERRUPT_HINT_STOP:
                        // Force stopped. Stop writing.
                        isRenderStopped_ = true;
                        break;
                }
            }
            if (interruptEvent.forceType == INTERRUPT_SHARE) { // Actions not forced; apps can choose how to handle them
                switch (interruptEvent.hintType) {
                    case INTERRUPT_HINT_PAUSE:
                        // Pause, if required.
                        break;
                    case INTERRUPT_HINT_RESUME:
                        // After a forced pause, resume if needed when this hint is received.
                        audioRenderer->Start();
                        break;
                }
            }
        }

        void OnStateChange(const RendererState state) override
        {
            switch (state) {
                case RENDERER_PREPARED:
                    // Renderer prepared
                    break;
                case RENDERER_RUNNING:
                    // Renderer in running state
                    break;
                case RENDERER_STOPPED:
                    // Renderer stopped
                    break;
                case RENDERER_RELEASED:
                    // Renderer released
                    break;
                case RENDERER_PAUSED:
                    // Renderer paused
                    break;
            }
        }
    };
    
    std::shared_ptr<AudioRendererCallback> audioRendererCB = std::make_shared<AudioRendererCallbackImpl>();
    audioRenderer->SetRendererCallback(audioRendererCB);

    Implement the AudioRendererCallback class, override the OnInterrupt function, and register the instance using the SetRendererCallback API. Once registered, the application receives the interrupt events.

    Each event carries the forced action taken by the audio framework as well as the action hints to be handled by the application. Refer to audio_renderer.h and audio_info.h for more details.

    Similarly, renderer state change callbacks can be received by overriding the OnStateChange function in the AudioRendererCallback class. Refer to audio_renderer.h for the list of renderer states.

  5. To get callbacks for the frame mark position and/or the frame period position, register the corresponding callbacks using the audioRenderer->SetRendererPositionCallback and/or audioRenderer->SetRendererPeriodPositionCallback functions, respectively.

    class RendererPositionCallbackImpl : public RendererPositionCallback {
        void OnMarkReached(const int64_t &framePosition) override
        {
            // frame mark reached
            // framePosition is the frame mark number
        }
    };

    std::shared_ptr<RendererPositionCallback> framePositionCB = std::make_shared<RendererPositionCallbackImpl>();
    // markPosition is the frame mark number for which the callback is requested.
    audioRenderer->SetRendererPositionCallback(markPosition, framePositionCB);
    
    class RendererPeriodPositionCallbackImpl : public RendererPeriodPositionCallback {
        void OnPeriodReached(const int64_t &frameNumber) override
        {
            // frame period reached
            // frameNumber is the frame period number
        }
    };

    std::shared_ptr<RendererPeriodPositionCallback> periodPositionCB = std::make_shared<RendererPeriodPositionCallbackImpl>();
    // framePeriodNumber is the frame period number for which the callback is requested.
    audioRenderer->SetRendererPeriodPositionCallback(framePeriodNumber, periodPositionCB);

    To unregister the position callbacks, call the corresponding audioRenderer->UnsetRendererPositionCallback and/or audioRenderer->UnsetRendererPeriodPositionCallback APIs.

  6. Call the audioRenderer->Start() function on the AudioRenderer instance to start the playback task.

  7. Get the buffer length to be written using the GetBufferSize API.

    audioRenderer->GetBufferSize(bufferLen);
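The buffer length reported by GetBufferSize is in bytes, so for a given stream configuration it maps directly to a playback duration. A stand-alone sketch of that conversion (the helper is illustrative, not part of the AudioRenderer API):

```cpp
#include <cassert>
#include <cstdint>

// Milliseconds of audio represented by bufferLenBytes for a PCM stream
// with the given sampling rate, channel count, and sample width.
constexpr int64_t BufferDurationMs(int64_t bufferLenBytes, int64_t sampleRateHz,
                                   int64_t channels, int64_t bytesPerSample)
{
    return bufferLenBytes * 1000 / (sampleRateHz * channels * bytesPerSample);
}

// 3528 bytes of 44.1 kHz stereo S16LE (the renderer configuration above) is 20 ms.
static_assert(BufferDurationMs(3528, 44100, 2, 2) == 20, "buffer duration");
```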
  8. Read the audio data to be played from the source (for example, an audio file) and transfer it into a byte stream. Call the Write function repeatedly to write the render data.

    bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t retBytes = audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
        if (retBytes < 0) {
            break;
        }
        bytesWritten += retBytes;
    }
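Step 8 assumes PCM source data is available. For experimentation without an audio file, a buffer can be synthesized instead; this stand-alone sketch (independent of the AudioRenderer API; the helper name is hypothetical) generates one second of a 16-bit mono sine tone that could be fed to Write:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Fill a buffer with one second of a 16-bit mono PCM sine tone.
// Illustrative only; a real application would read PCM data from a file.
std::vector<int16_t> MakeSineTone(int sampleRateHz, double freqHz, double amplitude)
{
    std::vector<int16_t> samples(sampleRateHz);
    const double twoPi = 2.0 * 3.14159265358979323846;
    for (int i = 0; i < sampleRateHz; ++i) {
        samples[i] = static_cast<int16_t>(amplitude * 32767.0 *
                                          std::sin(twoPi * freqHz * i / sampleRateHz));
    }
    return samples;
}
```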
  9. In the case of audio interrupts, the application can encounter write failures. Interrupt-unaware applications can check the renderer state using the GetStatus API before writing further audio data. Interrupt-aware applications will have more details accessible via AudioRendererCallback.

    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t retBytes = audioRenderer->Write(buffer.get() + bytesWritten, bytesToWrite - bytesWritten);
        if (retBytes < 0) { // Error occurred
            if (audioRenderer->GetStatus() == RENDERER_PAUSED) { // Query the state and take appropriate action
                isRenderPaused_ = true;
                int32_t seekPos = bytesWritten - bytesToWrite;
                fseek(wavFile, seekPos, SEEK_CUR);
            }
            break;
        }
        bytesWritten += retBytes;
    }
  10. Call audioRenderer->Drain() to drain the playback stream.

  11. Call the audioRenderer->Stop() function to stop rendering.

  12. After the playback task is complete, call the audioRenderer->Release() function on the AudioRenderer instance to release the stream resources.

  13. Use audioRenderer->SetVolume(float) and audioRenderer->GetVolume() to set and get the track volume. The value ranges from 0.0 to 1.0.
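SetVolume takes a linear scale factor in [0.0, 1.0], so applications working in decibels need to convert first. A hypothetical helper (not part of the AudioRenderer API) illustrating the standard dB-to-linear mapping:

```cpp
#include <cassert>
#include <cmath>

// Convert a decibel attenuation (<= 0 dB) to the linear [0.0, 1.0]
// scale that SetVolume expects. 0 dB maps to full volume (1.0).
inline float DbToLinear(float db)
{
    return std::pow(10.0f, db / 20.0f);
}
```

For example, a -6 dB attenuation maps to roughly half of full scale.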

The basic playback use case is provided above. Refer to audio_renderer.h and audio_info.h for more useful APIs.

Audio Recording

You can use the APIs provided in this repository for your application to record voices using input devices, convert the voices into audio data, and manage recording tasks. The following steps describe how to use AudioCapturer to develop the audio recording function:

  1. Use the Create API with the required capturer configuration to get an AudioCapturer instance.

    AudioCapturerOptions capturerOptions;
    capturerOptions.streamInfo.samplingRate = AudioSamplingRate::SAMPLE_RATE_48000;
    capturerOptions.streamInfo.encoding = AudioEncodingType::ENCODING_PCM;
    capturerOptions.streamInfo.format = AudioSampleFormat::SAMPLE_S16LE;
    capturerOptions.streamInfo.channels = AudioChannel::MONO;
    capturerOptions.capturerInfo.sourceType = SourceType::SOURCE_TYPE_MIC;
    capturerOptions.capturerInfo.capturerFlags = CAPTURER_FLAG;
    
    unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(capturerOptions);
  2. (Optional) The static APIs GetSupportedFormats(), GetSupportedChannels(), GetSupportedEncodingTypes(), and GetSupportedSamplingRates() can be used to get the supported values of the parameters.

  3. (Optional) Use audioCapturer->GetCapturerInfo(AudioCapturerInfo &) and audioCapturer->GetStreamInfo(AudioStreamInfo &) to retrieve the current capturer configuration values.

  4. Capturer state change callbacks can be received by overriding the OnStateChange function in the AudioCapturerCallback class and registering the callback instance using the audioCapturer->SetCapturerCallback API.

    class AudioCapturerCallbackImpl : public AudioCapturerCallback {
        void OnStateChange(const CapturerState state) override
        {
            switch (state) {
                case CAPTURER_PREPARED:
                    // Capturer prepared
                    break;
                case CAPTURER_RUNNING:
                    // Capturer in running state
                    break;
                case CAPTURER_STOPPED:
                    // Capturer stopped
                    break;
                case CAPTURER_RELEASED:
                    // Capturer released
                    break;
            }
        }
    };
    
    std::shared_ptr<AudioCapturerCallback> audioCapturerCB = std::make_shared<AudioCapturerCallbackImpl>();
    audioCapturer->SetCapturerCallback(audioCapturerCB);
  5. To get callbacks for the frame mark position and/or the frame period position, register the corresponding callbacks using the audioCapturer->SetCapturerPositionCallback and/or audioCapturer->SetCapturerPeriodPositionCallback functions, respectively.

    class CapturerPositionCallbackImpl : public CapturerPositionCallback {
        void OnMarkReached(const int64_t &framePosition) override
        {
            // frame mark reached
            // framePosition is the frame mark number
        }
    };

    std::shared_ptr<CapturerPositionCallback> framePositionCB = std::make_shared<CapturerPositionCallbackImpl>();
    // markPosition is the frame mark number for which the callback is requested.
    audioCapturer->SetCapturerPositionCallback(markPosition, framePositionCB);
    
    class CapturerPeriodPositionCallbackImpl : public CapturerPeriodPositionCallback {
        void OnPeriodReached(const int64_t &frameNumber) override
        {
            // frame period reached
            // frameNumber is the frame period number
        }
    };

    std::shared_ptr<CapturerPeriodPositionCallback> periodPositionCB = std::make_shared<CapturerPeriodPositionCallbackImpl>();
    // framePeriodNumber is the frame period number for which the callback is requested.
    audioCapturer->SetCapturerPeriodPositionCallback(framePeriodNumber, periodPositionCB);

    To unregister the position callbacks, call the corresponding audioCapturer->UnsetCapturerPositionCallback and/or audioCapturer->UnsetCapturerPeriodPositionCallback APIs.

  6. Call the audioCapturer->Start() function on the AudioCapturer instance to start the recording task.

  7. Get the buffer length to be read using the GetBufferSize API.

    audioCapturer->GetBufferSize(bufferLen);
  8. Read the captured audio data and convert it to a byte stream. Call the Read function repeatedly to read data until you want to stop recording.

    // Set isBlockingRead = true/false for a blocking/non-blocking read
    while (numBuffersToCapture) {
        bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead);
        if (bytesRead < 0) {
            break;
        } else if (bytesRead > 0) {
            fwrite(buffer, size, bytesRead, recFile); // This example writes the recorded data into a file
            numBuffersToCapture--;
        }
    }
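With a fixed buffer size, the loop above captures a predictable amount of audio. A stand-alone sketch (independent of the AudioCapturer API; the helper name is hypothetical) of how many milliseconds numBuffersToCapture buffers represent:

```cpp
#include <cassert>
#include <cstdint>

// Total milliseconds of PCM audio captured by numBuffers reads of
// bufferLenBytes each, for the given stream configuration.
constexpr int64_t CapturedDurationMs(int64_t numBuffers, int64_t bufferLenBytes,
                                     int64_t sampleRateHz, int64_t channels,
                                     int64_t bytesPerSample)
{
    return numBuffers * bufferLenBytes * 1000 /
           (sampleRateHz * channels * bytesPerSample);
}

// 256 buffers of 3840 bytes at 48 kHz mono S16LE (the capturer configuration
// above) is 40 ms per buffer, 10240 ms in total.
static_assert(CapturedDurationMs(256, 3840, 48000, 1, 2) == 10240, "captured duration");
```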
  9. (Optional) Call audioCapturer->Flush() to flush the capture buffer of this stream.

  10. Call the audioCapturer->Stop() function on the AudioCapturer instance to stop the recording.

  11. After the recording task is complete, call the audioCapturer->Release() function on the AudioCapturer instance to release the stream resources.

The basic recording use case is provided above. Refer to audio_capturer.h and audio_info.h for more APIs.

Audio Management

You can use the APIs provided in audio_system_manager.h to control the volume and devices.

  1. Use the GetInstance API to get an AudioSystemManager instance.
    AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance();

Volume Control

  1. Use the GetMaxVolume and GetMinVolume APIs to query the maximum and minimum volume levels allowed for the stream, and use this range when setting the volume.
    AudioSystemManager::AudioVolumeType streamType = AudioSystemManager::AudioVolumeType::STREAM_MUSIC;
    int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType);
    int32_t minVol = audioSystemMgr->GetMinVolume(streamType);
  2. Use SetVolume and GetVolume APIs to set and get the volume level of the stream.
    int32_t result = audioSystemMgr->SetVolume(streamType, 10);
    int32_t vol = audioSystemMgr->GetVolume(streamType);
  3. Use SetMute and IsStreamMute APIs to set and get the mute status of the stream.
    int32_t result = audioSystemMgr->SetMute(streamType, true);
    bool isMute = audioSystemMgr->IsStreamMute(streamType);
  4. Use the SetRingerMode and GetRingerMode APIs to set and get the ringer mode. Refer to the AudioRingerMode enum in audio_info.h for supported ringer modes.
    int32_t result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT);
    AudioRingerMode ringMode = audioSystemMgr->GetRingerMode();
  5. Use the SetMicrophoneMute and IsMicrophoneMute APIs to mute/unmute the microphone and to check whether it is muted.
    int32_t result = audioSystemMgr->SetMicrophoneMute(true);
    bool isMicMute = audioSystemMgr->IsMicrophoneMute();
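Volume levels passed to SetVolume should stay within the [minVol, maxVol] range returned above, so callers typically clamp a requested level first. A minimal illustrative helper (not part of AudioSystemManager):

```cpp
#include <cassert>
#include <cstdint>

// Clamp a requested volume level into the [minVol, maxVol] range
// reported by GetMinVolume/GetMaxVolume before calling SetVolume.
inline int32_t ClampVolume(int32_t vol, int32_t minVol, int32_t maxVol)
{
    if (vol < minVol) {
        return minVol;
    }
    if (vol > maxVol) {
        return maxVol;
    }
    return vol;
}
```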

Device control

  1. Use the GetDevices API together with the deviceType_ and deviceRole_ members to get audio I/O device information. For the DeviceFlag, DeviceType, and DeviceRole enums, refer to audio_info.h.

    DeviceFlag deviceFlag = ALL_DEVICES_FLAG;
    vector<sptr<AudioDeviceDescriptor>> audioDeviceDescriptors = audioSystemMgr->GetDevices(deviceFlag);
    for (auto &audioDeviceDescriptor : audioDeviceDescriptors) {
        cout << audioDeviceDescriptor->deviceType_ << endl;
        cout << audioDeviceDescriptor->deviceRole_ << endl;
    }
  2. Use the SetDeviceActive and IsDeviceActive APIs to activate/deactivate a device and to check whether the device is active.

    ActiveDeviceType deviceType = SPEAKER;
    int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true);
    bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType);
  3. Use the SetDeviceChangeCallback API to register for device change events. Clients receive a callback when a device is connected or disconnected. Currently, the audio subsystem supports sending device change events for WIRED_HEADSET, USB_HEADSET, and BLUETOOTH_A2DP devices. The OnDeviceChange function is called with a DeviceChangeAction object, which contains the following parameters:
    type : DeviceChangeType, which specifies whether the device was connected or disconnected.
    deviceDescriptors : An array of AudioDeviceDescriptor objects, which specify the device type and its role (input/output device).

    class DeviceChangeCallback : public AudioManagerDeviceChangeCallback {
    public:
       DeviceChangeCallback() = default;
       ~DeviceChangeCallback() = default;
       void OnDeviceChange(const DeviceChangeAction &deviceChangeAction) override
       {
           cout << deviceChangeAction.type << endl;
           for (auto &audioDeviceDescriptor : deviceChangeAction.deviceDescriptors) {
               switch (audioDeviceDescriptor->deviceType_) {
                   case DEVICE_TYPE_WIRED_HEADSET: {
                       if (deviceChangeAction.type == CONNECT) {
                           cout << "wired headset connected" << endl;
                       } else {
                           cout << "wired headset disconnected" << endl;
                       }
                       break;
                   }
                   case DEVICE_TYPE_USB_HEADSET: {
                       if (deviceChangeAction.type == CONNECT) {
                           cout << "usb headset connected" << endl;
                       } else {
                           cout << "usb headset disconnected" << endl;
                       }
                       break;
                   }
                   case DEVICE_TYPE_BLUETOOTH_A2DP: {
                       if (deviceChangeAction.type == CONNECT) {
                           cout << "Bluetooth device connected" << endl;
                       } else {
                           cout << "Bluetooth device disconnected" << endl;
                       }
                       break;
                   }
                   default: {
                       cout << "Unsupported device" << endl;
                       break;
                   }
               }
           }
       }
    };
    
    auto callback = std::make_shared<DeviceChangeCallback>();
    audioSystemMgr->SetDeviceChangeCallback(callback);
  4. Other useful APIs such as IsStreamActive, SetAudioParameter, and GetAudioParameter are also provided. Refer to audio_system_manager.h for more details.

  5. Applications can register for system volume changes using AudioManagerNapi::On. When an application registers for the volume change event, it is notified of every volume change with the following parameters:
    volumeType : The AudioVolumeType for which the volume was updated.
    volume : The current volume level.
    updateUi : Whether the volume change should be shown in the UI. (If the volume is updated through the volume keys, updateUi is set to true; in other scenarios it is set to false.)

    const audioManager = audio.getAudioManager();

    export default {
         onCreate() {
             audioManager.on('volumeChange', (volumeChange) => {
                 console.info('volumeType = ' + volumeChange.volumeType);
                 console.info('volume = ' + volumeChange.volume);
                 console.info('updateUi = ' + volumeChange.updateUi);
             });
         }
    }

Audio Scene

  1. Use the SetAudioScene and GetAudioScene APIs to change and check the audio scene, respectively.
    int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL);
    AudioScene audioScene = audioSystemMgr->GetAudioScene();

Refer to the AudioScene enum in audio_info.h for supported audio scenes.

JavaScript Usage:

JavaScript apps can use the APIs provided by the audio manager to control the volume and the devices.
Refer to js-apis-audio.md for the complete list of JavaScript APIs available for the audio manager.

Ringtone Management

You can use the APIs provided in iringtone_sound_manager.h and iringtone_player.h for ringtone playback functions.

  1. Use the CreateRingtoneManager API to get an IRingtoneSoundManager instance.
    std::shared_ptr<IRingtoneSoundManager> ringtoneManagerClient = RingtoneFactory::CreateRingtoneManager();
  2. Use the SetSystemRingtoneUri API to set the system ringtone URI.
    std::string uri = "/data/media/test.wav";
    RingtoneType ringtoneType = RINGTONE_TYPE_DEFAULT;
    ringtoneManagerClient->SetSystemRingtoneUri(context, uri, ringtoneType);
  3. Use the GetRingtonePlayer API to get an IRingtonePlayer instance.
    std::unique_ptr<IRingtonePlayer> ringtonePlayer = ringtoneManagerClient->GetRingtonePlayer(context, ringtoneType);
  4. Use Configure API to configure the ringtone player.
    float volume = 1;
    bool loop = true;
    ringtonePlayer->Configure(volume, loop);
  5. Use the Start, Stop, and Release APIs on the ringtone player instance to control playback states.
    ringtonePlayer->Start();
    ringtonePlayer->Stop();
    ringtonePlayer->Release();
  6. Use the GetTitle API to get the title of the current system ringtone.
  7. Use GetRingtoneState to get the ringtone playback state (RingtoneState).
  8. Use GetAudioRendererInfo to get the AudioRendererInfo and check the content type and stream usage.

Supported devices

The audio subsystem currently supports the following device types:

  1. USB Type-C Headset
    A digital headset that includes its own DAC (digital-to-analog converter) and amplifier as part of the headset.
  2. Wired Headset
    An analog headset that does not contain a DAC. It can have a 3.5 mm jack or a Type-C jack without a DAC.
  3. Bluetooth Headset
    A Bluetooth A2DP (Advanced Audio Distribution Profile) headset used for streaming audio wirelessly.
  4. Internal Speaker and MIC
    The internal speaker and microphone are supported and are used as the default devices for playback and recording, respectively.

Repositories Involved

multimedia_audio_standard
