Using speaker diarization for streaming transcription with Amazon Transcribe and Amazon Transcribe Medical
Conversational audio data that requires transcription, such as phone calls, doctor visits, and online meetings, often has multiple speakers. In these use cases, it’s important to accurately label the speaker and associate them to the audio content delivered. For example, you can distinguish between a doctor’s questions and a patient’s responses in the transcription of a live medical consultation.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to applications. With the launch of speaker diarization for streaming transcriptions, you can use Amazon Transcribe and Amazon Transcribe Medical to label the different speakers in real-time customer service calls, conference calls, live broadcasts, or clinical visits. Speaker diarization, also called speaker labeling, is critical to creating an accurate transcription because it distinguishes what each speaker said. Diarization labels speakers generically, typically as speaker A and speaker B, whereas speaker identification refers to identifying speakers specifically, such as Sally or Alfonso. With speaker diarization, you can request that Amazon Transcribe and Amazon Transcribe Medical accurately label up to five speakers in an audio stream. Although Amazon Transcribe can label more than five speakers in a stream, the accuracy of speaker diarization decreases if you exceed that number. In some cases, the different speakers may be on different channels (for example, in a call center). In those cases, you can use Amazon Transcribe channel identification to separate multiple channels from within a live audio stream and generate transcripts that label each audio channel.
This post uses an example application to show you how to use the AWS SDK for Java to start a stream that sends conversational audio from your microphone to Amazon Transcribe and receives transcripts with speaker labels in real time. The solution is a Java application that transcribes streaming audio from multiple speakers in real time, labels each speaker in the transcription results, and lets you export the transcript.
You can find the application in the GitHub repo. We include detailed steps to set up and run the application in this post.
Prerequisites
You need an AWS account to proceed with the solution. Additionally, the AmazonTranscribeFullAccess policy must be attached to the AWS Identity and Access Management (IAM) role you use for this demo. To create an IAM role with the necessary permissions, complete the following steps:
- Sign in to the AWS Management Console and open the IAM console.
- On the navigation pane, under Access management, choose Roles.
- You can use an existing IAM role to create and run transcription jobs, or choose Create role.
- Under Common use cases, choose EC2. You can select any use case, but EC2 is one of the most straightforward ones.
- Choose Next: Permissions.
- For the policy name, enter AmazonTranscribeFullAccess.
- Choose Next: Tags.
- Choose Next: Review.
- For Role name, enter a role name.
- Remove the text under Role description.
- Choose Create role.
- Choose the role you created.
- Choose Trust relationships.
- Choose Edit trust relationship.
- Replace the trust policy text in your role with the following code:
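A minimal sketch of such a trust policy, assuming you want to allow the Amazon Transcribe service to assume the role, follows; adapt the principal to your own security requirements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "transcribe.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```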
Solution overview
Amazon Transcribe streaming transcription enables you to send a live audio stream to Amazon Transcribe and receive a stream of text in real time. You can label different speakers in either HTTP/2 or WebSocket streams. Speaker diarization works best for labeling between two and five speakers. Although Amazon Transcribe can label more than five speakers in a stream, the accuracy of speaker separation decreases if you exceed five speakers.
To start an HTTP/2 stream, we specify the ShowSpeakerLabel request parameter of the StartStreamTranscription operation in our demo solution. See the following code:
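A minimal sketch using the AWS SDK for Java v2 follows; the class name is illustrative, and the encoding and sample rate should match the audio your application captures:

```java
import software.amazon.awssdk.services.transcribestreaming.model.LanguageCode;
import software.amazon.awssdk.services.transcribestreaming.model.MediaEncoding;
import software.amazon.awssdk.services.transcribestreaming.model.StartStreamTranscriptionRequest;

public class DiarizationRequest {
    // Builds a streaming transcription request with speaker labeling enabled.
    static StartStreamTranscriptionRequest build() {
        return StartStreamTranscriptionRequest.builder()
                .languageCode(LanguageCode.EN_US)  // speaker labeling supports US English streams
                .mediaEncoding(MediaEncoding.PCM)  // raw PCM audio from the microphone
                .mediaSampleRateHertz(16_000)      // illustrative; match your capture sample rate
                .showSpeakerLabel(true)            // the parameter that requests speaker diarization
                .build();
    }
}
```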
Amazon Transcribe streaming returns a Result object as part of the transcription response element that can be used to label the speakers in the transcript. To learn more about the parameters in this Result object, see Response Syntax.
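For example, when ShowSpeakerLabel is enabled, each transcribed item in the result carries a speaker label. The following sketch, assuming the AWS SDK for Java v2 event model, prints every word together with its label:

```java
import software.amazon.awssdk.services.transcribestreaming.model.TranscriptEvent;

public class SpeakerLabelPrinter {
    // Prints each transcribed word with its speaker label.
    // item.speaker() is populated only when ShowSpeakerLabel is set on the request.
    static void onTranscriptEvent(TranscriptEvent event) {
        event.transcript().results().forEach(result ->
                result.alternatives().forEach(alternative ->
                        alternative.items().forEach(item ->
                                System.out.println(item.speaker() + ": " + item.content()))));
    }
}
```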
Our solution demonstrates speaker diarization during transcription for real-time audio captured via the microphone. Amazon Transcribe breaks your incoming audio stream based on natural speech segments, such as a change in speaker or a pause in the audio. The transcription is returned progressively to your application, with each response containing more transcribed speech until the entire segment is transcribed. For more information, see Identifying Speakers.
Launching the application
Complete the following prerequisites to launch the Java application. If you already have the JDK, JavaFX, or Maven installed, you can skip the corresponding sections (Installing JDK, Installing JavaFX, and Installing Maven). For all environment variables mentioned in the following steps, a good option is to add them to the ~/.bashrc file and apply them by running source ~/.bashrc after you open a shell.
Installing JDK
As your first step, download and install Java SE. When the installation is complete, set the JAVA_HOME
variable (see the following code). Make sure to select the path to the correct Java version and confirm the path is valid.
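The paths below are illustrative; point JAVA_HOME at the directory where you actually installed the JDK:

```bash
# Illustrative install location; replace with your actual JDK directory.
export JAVA_HOME=/usr/lib/jvm/jdk-14
export PATH="$JAVA_HOME/bin:$PATH"

# Confirm the path is valid.
java -version
```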
Installing JavaFX
For instructions on downloading and installing JavaFX, see Getting Started with JavaFX. Set up the environment variable as described in the instructions or by entering the following code (replace path/to with the directory where you installed JavaFX):
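```bash
# Replace path/to with your JavaFX SDK location; PATH_TO_FX should point to its lib directory.
export PATH_TO_FX=path/to/javafx-sdk/lib
```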
Test your JavaFX installation as shown in the sample application on GitHub.
Installing Maven
Download the latest version of Apache Maven. For installation instructions, see Installing Apache Maven.
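After installation, a quick way to confirm that Maven is set up correctly is to check its version:

```bash
# Prints the Maven version and the JDK it found; both should match what you installed.
mvn -version
```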
Installing the AWS CLI (Optional)
As an optional step, you can install the AWS Command Line Interface (AWS CLI). For instructions, see Installing, updating, and uninstalling the AWS CLI version 2. You can use the AWS CLI to validate and troubleshoot the solution as needed.
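For example, after you set up your credentials (see the next section), the following command confirms that the AWS CLI can reach your account:

```bash
# Prints the account and IAM identity that your credentials resolve to.
aws sts get-caller-identity
```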
Setting up AWS access
Lastly, set up the access key and secret access key required for programmatic access to AWS. For instructions, see Programmatic access. Choose the Region closest to your location. For more information, see the Amazon Transcribe Streaming section in Service Endpoints.
When you know the Region and access keys, open a terminal window on your computer and assign them to environment variables for access within our solution:
- export AWS_ACCESS_KEY_ID=<access-key>
- export AWS_SECRET_ACCESS_KEY=<secret-access-key>
- export AWS_REGION=<aws region>
Solution demonstration
The following video demonstrates how you can compile and run the Java application presented in this post. Use the following sections to walk through these steps yourself.
The quality of the transcription results depends on many factors. For example, the quality can be affected by artifacts such as background noise, speakers talking over each other, complex technical jargon, the volume disparity between speakers, and the audio recording devices you use. You can use a variety of capabilities provided by Amazon Transcribe to improve transcription quality. For example, you can use custom vocabularies to recognize out-of-lexicon terms. You can even use custom language models, which enable you to use your own data to build domain-specific models. For more information, see Improving Domain-Specific Transcription Accuracy with Custom Language Models.
Setting up the solution
To implement the solution, complete the following steps:
- Clone the solution’s GitHub repo in your local computer using the following command:
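The following assumes the repository is hosted under the aws-samples organization; if it differs, use the URL from the GitHub repo link earlier in this post:

```bash
git clone https://github.com/aws-samples/aws-transcribe-streaming-example-java.git
```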
- Navigate to the main directory of the solution, aws-transcribe-streaming-example-java, with the following code:
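```bash
cd aws-transcribe-streaming-example-java
```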
- Compile the source code and build a package for running our solution:
  - Enter `mvn compile`. If the compile is successful, you should see a BUILD SUCCESS message. If there are errors in compilation, they are most likely related to JavaFX path issues; fix them based on the instructions in the Installing JavaFX section of this post.
  - Enter `mvn clean package`. You should see a BUILD SUCCESS message if everything went well. This command compiles the source files and creates a packaged JAR file that we use to run the solution. If you're repeating the build exercise, you don't need to enter `mvn compile` every time.
- Run the solution by entering the following code:
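The packaged JAR name depends on the project's pom.xml; substitute the name of the JAR that `mvn clean package` created under target/:

```bash
# Replace <packaged-jar-name> with the JAR created under target/ by mvn clean package.
java -jar target/<packaged-jar-name>.jar
```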
If you receive an error, it's likely because you already had a version of Java or JavaFX and Maven installed and skipped the steps to install the JDK and JavaFX in this post. If so, enter the following code:
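This is a sketch, assuming the PATH_TO_FX variable from the Installing JavaFX section; the exact set of JavaFX modules your build needs may differ:

```bash
# Points the JVM at your JavaFX installation; adjust the module list to what the application needs.
java --module-path "$PATH_TO_FX" --add-modules javafx.controls,javafx.media \
    -jar target/<packaged-jar-name>.jar
```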
You should see a Java UI window open.
Running the demo solution
Follow the steps in this section to run the demo yourself. You need two to five speakers present to try out the speaker diarization functionality. This application requires that all speakers use the same audio input when speaking.
- Choose Start Microphone Transcription in the Java UI application.
- Use your computer’s microphone to stream audio of two or more people (not more than five) conversing.
- As of this writing, Amazon Transcribe speaker labeling supports real-time streams that are in US English.
You should see the speaker designations and the corresponding transcript appearing in the In-Progress Transcriptions window as the conversation progresses. When the transcript is complete, it should appear in the Final Transcription window.
- Choose Save Full Transcript to store the transcript locally in your computer.
Conclusion
This post demonstrated how you can easily infuse your applications with real-time ASR capabilities using Amazon Transcribe streaming and showcased an important new feature that enables speaker diarization in real-time audio streams.
With Amazon Transcribe and Amazon Transcribe Medical, you can use speaker separation to generate real-time insights from your conversations, such as in-clinic visits or customer service calls, and send these insights to downstream applications for natural language processing, or to human loops for review using Amazon Augmented AI (Amazon A2I). For more information, see Improving speech-to-text transcripts from Amazon Transcribe using custom vocabularies and Amazon Augmented AI.
About the Authors
Prem Ranga is an Enterprise Solutions Architect based out of Houston, Texas. He is part of the Machine Learning Technical Field Community and loves working with customers on their ML and AI journey. Prem is passionate about robotics, is an Autonomous Vehicles researcher, and also built the Alexa-controlled Beer Pours in Houston and other locations.
Talia Chopra is a Technical Writer in AWS specializing in machine learning and artificial intelligence. She works with multiple teams in AWS to create technical documentation and tutorials for customers using Amazon SageMaker, MxNet, and AutoGluon. In her free time, she enjoys meditating, studying machine learning, and taking walks in nature.
Parsa Shahbodaghi is a Technical Writer in AWS specializing in machine learning and artificial intelligence. He writes the technical documentation for Amazon Transcribe and Amazon Transcribe Medical. In his free time, he enjoys meditating, listening to audiobooks, weightlifting, and watching stand-up comedy. He will never be a stand-up comedian, but at least his mom thinks he’s funny.
Mahendar Gajula is a Sr. Data Architect at AWS. He works with AWS customers in their journey to the cloud with a focus on data lake, data warehouse, and AI/ML projects. In his spare time, he enjoys playing tennis and spending time with his family.