Generating compositions in the style of Bach using the AR-CNN algorithm in AWS DeepComposer
AWS DeepComposer gives you a creative way to get started with machine learning (ML) and generative AI techniques. AWS DeepComposer recently launched a new generative AI algorithm called autoregressive convolutional neural network (AR-CNN), which allows you to generate music in the style of Bach. In this blog post, we show a few examples of how you can use the AR-CNN algorithm to generate interesting compositions in the style of Bach and explain how the algorithm’s parameters impact the characteristics of the generated composition.
The AR-CNN algorithm provided in the AWS DeepComposer console offers a variety of parameters for generating unique compositions, such as the number of sampling iterations and the maximum number of notes to add to or remove from the input melody. The parameter values directly affect the extent to which the input melody is modified. For example, setting the maximum number of notes to add to a high value allows the algorithm to add notes it predicts are suitable for a composition in the style of Bach.
The AR-CNN algorithm allows you to collaborate iteratively with the model by experimenting with the parameters; you can use the output from one run of the AR-CNN algorithm as input to the next.
For more information about the algorithm’s concepts, see the Introduction to autoregressive convolutional neural network learning capsule available in the AWS DeepComposer console. Learning capsules provide easy-to-consume, bite-size modules to help you learn the concepts of generative AI algorithms.
Bach is widely regarded as one of the greatest composers of all time. His compositions represent the best of the Baroque era. Listen to a few examples from original Bach compositions to familiarize yourself with his music:
Composition 1:
Composition 2:
Composition 3:
The AR-CNN algorithm enhances the original input melody by adding or removing notes. If the algorithm detects off-key or extraneous notes, it may choose to remove them. If it identifies notes that are highly probable in a Bach composition, it may choose to add them. Listen to the following example of an enhanced composition generated by applying the AR-CNN algorithm.
Input:
Enhanced composition:
You can change the values of the AR-CNN algorithm parameters to generate music with different characteristics.
AR-CNN parameters in AWS DeepComposer
In the previous section, you heard an example of a composition created in the style of Bach using the AR-CNN algorithm. The following section explores how the algorithm parameters provided in the AWS DeepComposer console influence the generated compositions. The following parameters are available in the console: Sampling iterations, Maximum number of notes to remove or add, and Creative risk.
Sampling iterations
This parameter controls the number of times the input melody is passed through the algorithm for it to add or remove notes. As the sampling iterations parameter increases, the model gets more chances to add or remove notes from the input melody to make the composition sound more Bach-like.
On the AWS DeepComposer console, the Music Studio has a limit of 100 sampling iterations in a single run. You can increase the sampling iterations beyond 100 by feeding the generated music back into the algorithm as input. Listen to the generated output for the following input melody, “Me and My Jar,” at different iterations.
Me and My Jar original input melody:
Me and My Jar output at iteration 100:
Me and My Jar output at iteration 500:
After a certain number of sampling iterations, the generated music doesn’t change much, even after going through more iterations. At this stage, the model has improved the input melody as much as it can. Further iterations may cause it to add notes and promptly remove them, or vice versa, so the generated music remains mostly unchanged after this point.
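If you want to apply the feedback idea programmatically, the following is a minimal sketch of running the AR-CNN in batches until the output stops changing. The `generate_composition` helper is a hypothetical stand-in for one AR-CNN run, not the actual DeepComposer console or SDK interface:

```python
# Minimal sketch: run the AR-CNN in 100-iteration batches, re-feeding each
# output as the next input, and stop once the composition stops changing.

def generate_composition(melody, sampling_iterations=100):
    # Hypothetical placeholder for one AR-CNN run; replace with however you
    # actually produce the enhanced melody (e.g., export and re-import MIDI).
    raise NotImplementedError("run the AR-CNN and return the enhanced melody")

def run_in_batches(input_melody, batch_iterations=100, max_batches=10):
    current = input_melody
    for _ in range(max_batches):
        enhanced = generate_composition(current, sampling_iterations=batch_iterations)
        if enhanced == current:
            # Converged: further iterations only add notes and then remove
            # them again (or vice versa), so the music stops changing.
            break
        current = enhanced
    return current
```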
Maximum number of notes to remove
This parameter allows you to specify the maximum percentage of your original composition that the algorithm can remove. Setting it to 0% ensures the input melody is completely preserved. Setting it to 100% allows the algorithm to remove a majority of the original notes, although even then it may choose to retain parts of your input melody depending on their quality and similarity to Bach’s music. Listen to the following example of the generated music for “Me and My Jar” when the maximum notes to remove parameter is set to 100%.
Me and My Jar original input melody:
Me and My Jar at 100% removal:
Me and My Jar at 0% removal:
At 100% removal, it’s difficult to detect the original composition in the generated composition. At 0%, the algorithm preserves your original input melody but is limited in its ability to enhance the music; for instance, it can’t remove off-key notes from the input melody.
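As a rough, back-of-the-envelope illustration (the exact internal bookkeeping isn’t documented here), you can think of this parameter as a removal budget derived from the size of your input melody:

```python
import math

def max_removable_notes(input_note_count, max_removal_percent):
    # Illustrative only: translate the "maximum notes to remove" percentage
    # into an absolute budget of input notes the algorithm may delete.
    return math.floor(input_note_count * max_removal_percent / 100)

# For a 120-note input melody:
print(max_removable_notes(120, 0))    # 0   -> the input melody is fully preserved
print(max_removable_notes(120, 100))  # 120 -> every note may be removed, though
                                      #        the model can still keep many of them
```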
Maximum number of notes to add
This parameter specifies the maximum number of notes that the algorithm can add to your input melody. Setting a low value fills in missing notes; setting a high value adds more notes to the input melody. If you don’t limit the number of notes the algorithm can add, the model attempts to make the melody as close to a Bach composition as possible.
Me and My Jar original input melody:
Me and My Jar at max 350 notes addition:
Me and My Jar at max 50 notes addition:
Notice how you can’t hear the original soundtrack anymore when a maximum of 350 notes is added. By limiting the number of notes that the algorithm can add, you can prevent the original input melody from being drowned out by new notes. The downside is that the algorithm becomes limited in its ability to generate music, which may produce less desirable output. With a maximum of 50 notes to add, significantly fewer notes are added, so you can hear the input melody. However, the music quality isn’t as high because the algorithm is limited in the number of notes it can add.
By using the maximum notes to remove and maximum notes to add parameters, you can choose how to balance preserving your original composition against giving the algorithm the freedom to generate new music.
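To make that trade-off concrete, here are two hypothetical parameter profiles: one that preserves the input melody and one that lets the algorithm transform it heavily. The field names are illustrative, not the console’s exact labels:

```python
# Illustrative AR-CNN parameter profiles (names are not the console's exact labels).

preserve_melody = {
    "sampling_iterations": 100,
    "max_notes_to_remove_percent": 0,    # keep every original note
    "max_notes_to_add": 50,              # only fill in a few missing notes
    "creative_risk": 1,
}

transform_heavily = {
    "sampling_iterations": 100,
    "max_notes_to_remove_percent": 100,  # any original note may be removed
    "max_notes_to_add": 350,             # allow a dense, Bach-like texture
    "creative_risk": 2,
}
```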
Creative risk
This parameter contributes the surprise element, or creativity factor, in a composition. Setting the value too low leads to more predictable music because the model only chooses safe, high-probability notes. A very high creative risk can result in more unique and less predictable compositions, but it may sometimes produce poor-quality melodies because the model chooses notes that are less likely to occur.
The algorithm picks notes by sampling from a probability distribution over what it believes is the best note to add next. The creative risk parameter adjusts the shape of that probability distribution. As you increase the creative risk, the distribution flattens, so less likely notes receive a higher probability of being chosen and added to the composition. The opposite occurs as you turn down the creative risk, which focuses the algorithm on the notes it’s most confident are correct. This parameter is also known as temperature in machine learning.
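The following is a minimal sketch, using NumPy and made-up logits, of how a temperature (creative risk) value reshapes a note probability distribution before sampling; it illustrates the idea rather than the algorithm’s actual implementation:

```python
import numpy as np

def sample_note(logits, creative_risk=1.0, rng=None):
    # Creative risk acts as a softmax temperature: higher values flatten the
    # distribution (more surprising notes), lower values sharpen it toward
    # the most probable notes.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / creative_risk
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Made-up logits over four candidate notes: watch the distribution flatten
# toward uniform as the creative risk increases.
logits = np.array([3.0, 1.0, 0.5, 0.1])
for risk in (0.5, 1.0, 6.0):
    scaled = logits / risk
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    print(f"creative risk {risk}: {np.round(probs, 2)}")
```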
Output after 1000 iterations with a creative risk of 1:
Output after 1000 iterations with a creative risk of 2:
Output after 1000 iterations with a creative risk of 6:
A creative risk of 1 is usually the baseline. As the creative risk increases, there are more and more scattered notes. These can add flair to your music. However, the scattered notes eventually devolve into noise because a completely flattened probability distribution is no different than random choice.
To experiment, you can turn up the creative risk for a few iterations to generate some scattered notes and then turn down the creative risk to have the algorithm use those notes when generating new music.
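Reusing the hypothetical `generate_composition` placeholder from the sampling iterations sketch above (extended here with a `creative_risk` keyword), that experiment might look like this:

```python
# Hypothetical sketch: a short, risky pass scatters in surprising notes, then a
# longer, conservative pass weaves them into the composition.

def explore_then_refine(melody, generate_composition):
    scattered = generate_composition(melody, sampling_iterations=20, creative_risk=3)
    return generate_composition(scattered, sampling_iterations=100, creative_risk=1)
```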
Conclusion
Congratulations! You’ve now learned how each parameter can affect the characteristics of your composition. In addition to changing the hyperparameters, we encourage you to try out the following next steps:
- Change the input melody. Play your own input melody using the virtual or physical AWS DeepComposer keyboard. Don’t worry if you’re not a musical expert; the AR-CNN algorithm automatically fixes mistakes.
- Crop your input melodies and see how the algorithm fills in missing sections.
- Feed your autoregressive composition into GANs to create accompaniments.
We are excited for you to try out various combinations to generate your creative musical piece. Start composing now!
About the Authors
Jyothi Nookula is a Principal Product Manager for AWS AI devices. She loves to build products that delight her customers. In her spare time, she loves to paint and host charity fundraisers for her art exhibitions.
Rahul Suresh is an Engineering Manager in the AWS AI organization, where he works on AI-based products that make machine learning accessible to all developers. Prior to joining AWS, Rahul was a Senior Software Developer at Amazon Devices and helped launch highly successful smart home products. Rahul is passionate about building machine learning systems at scale and is always looking to get these advanced technologies into the hands of customers. In addition to his professional career, Rahul is an avid reader and a history buff.
Prachi Kumar is an ML Engineer and AI Researcher at AWS, where she works on AI-based products that help teach users about machine learning. Prior to joining AWS, Prachi was a graduate student in Computer Science at UCLA, and she focused on ML projects and courses throughout her master’s and undergraduate studies. In her spare time, she enjoys reading and watching movies.
Wayne Chi is an ML Engineer and AI Researcher at AWS. He works on researching interesting machine learning problems to teach new developers and then bringing those ideas into production. Prior to joining AWS, he was a Software Engineer and AI Researcher at NASA’s Jet Propulsion Laboratory (JPL), where he worked on AI planning and scheduling systems for the Mars 2020 rover (Perseverance). In his spare time he enjoys playing tennis, watching movies, and learning more about AI.
Enoch Chen is a Senior Technical Program Manager for AWS AI Devices. He is a big fan of machine learning and loves to explore innovative AI applications. Recently he helped bring DeepComposer to thousands of developers. Outside of work, Enoch enjoys playing piano and listening to classical music.