Given the current state of the art in deep learning, no single algorithm or network can build an advanced application like Auria on its own. However, the components of Auria’s creative pursuit can be emulated with individual state-of-the-art algorithms. This insight led us to a pipeline architecture for Auria.
Engineering Architecture of Auria
The engineering pipeline of Auria consists of three main components:
An LSTM-based language model, trained on 350,000 (3.5 lakh) haikus scraped from Reddit, used to generate artificial poetry.
A text-to-image network, AttnGAN from Microsoft Research, which converts the generated haiku into an abstract image.
A photorealistic style transfer algorithm, which selects a random style image from the WikiArt dataset and transfers its color and brush strokes to the generated image. The WikiArt dataset is a collection of over 4,000 curated artworks, aggregated by the emotions they induce in human viewers.
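To make the first stage concrete, here is a minimal sketch of an LSTM language model of the kind that could generate haikus. The vocabulary size, embedding and hidden dimensions, and the greedy sampling loop are illustrative assumptions, not Auria’s actual configuration:

```python
import torch
import torch.nn as nn

class HaikuLM(nn.Module):
    """Minimal sketch of an LSTM language model for poetry generation."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)
        self.vocab_size = vocab_size

    def forward(self, tokens, state=None):
        x = self.embed(tokens)              # (batch, seq, embed_dim)
        out, state = self.lstm(x, state)    # (batch, seq, hidden_dim)
        return self.head(out), state        # (batch, seq, vocab_size)

    @torch.no_grad()
    def sample(self, start_token, max_len=20):
        """Autoregressively sample token ids, one step at a time."""
        tokens, state = [start_token], None
        for _ in range(max_len - 1):
            inp = torch.tensor([[tokens[-1]]])
            logits, state = self.forward(inp, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            tokens.append(torch.multinomial(probs, 1).item())
        return tokens

model = HaikuLM(vocab_size=100)
poem = model.sample(start_token=1)  # a list of sampled token ids
```

An untrained model like this emits random tokens; after training on the haiku corpus, the same sampling loop would produce poem-like sequences to feed into the text-to-image stage.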
Challenges on pipelining different algorithms
Stacking individual state-of-the-art algorithms let us build Auria, but the challenge of this approach was linking the components so they work together in a common space. The problems we ran into were:
Modifying the official implementations of the research papers, which were developed and tested in different environments (e.g., different Python versions).
Some of the algorithms that use GPUs for training and testing are tightly coupled to specific CUDA versions.
Each algorithm needs to run in an isolated container so that it can be deployed on a common production platform without disrupting the other environments.
The data flow between the components should be fluid.
Deep learning algorithms demand serious computational hardware. Along with step-level isolation, we required powerful compute resources such as GPUs at each step.
Deploying Auria as a web application, so people can come and experience her creative pursuit, despite the diverse development settings.
A machine learning workflow is a pipeline: prepare the data; build, train, and tune models; then deploy the best model to production to serve predictions. Azure Machine Learning (AML) pipelines capture this workflow in a form that can be reused as a template across machine learning scenarios.
We adapted this conception of AML pipelines to build an advanced application like Auria. Moving to the platform was not difficult, since the basic building blocks of AML pipelines are designed with scaled applications in mind.
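As a sketch of how Auria’s stages might map onto AML pipeline steps using the Python SDK (v1): the step names, script names, directories, and the `"gpu-cluster"` compute target are hypothetical, and the snippet is a configuration sketch that assumes a live workspace with a `config.json` on disk:

```python
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()          # reads the workspace config.json
store = ws.get_default_datastore()

# Intermediate artifacts passed between steps
haiku = PipelineData("haiku", datastore=store)
image = PipelineData("image", datastore=store)

haiku_step = PythonScriptStep(
    name="generate_haiku", script_name="generate_haiku.py",
    arguments=["--out", haiku], outputs=[haiku],
    compute_target="gpu-cluster", source_directory="haiku_lm")

image_step = PythonScriptStep(
    name="haiku_to_image", script_name="attngan_infer.py",
    arguments=["--in", haiku, "--out", image],
    inputs=[haiku], outputs=[image],
    compute_target="gpu-cluster", source_directory="attngan")

pipeline = Pipeline(workspace=ws, steps=[haiku_step, image_step])
Experiment(ws, "auria").submit(pipeline)
```

Because each `PythonScriptStep` declares its inputs and outputs explicitly, the platform handles the data flow between stages that we otherwise had to wire up by hand.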
Why AML Pipelines for Auria?
The most popular programming language in the machine learning realm is Python. Since AML Pipelines offers a Python 3 SDK, we were not worried about moving our existing stack to the platform. All three algorithms behind Auria are implemented in Python, and we could replicate their results with the SDK without any hassle.
In Auria’s pipeline, we have models we trained ourselves as well as models that use pre-trained weights. Because the development environments of these algorithms were distinct, we needed strict isolation at each step of the pipeline. Thanks to the platform, each step in an AML pipeline is a Docker container, which lets us build individual steps without disturbing the settings of the others. The dockerized steps are also portable, so we could reuse these components across multiple experiments.
Each step can attach its own compute engine, CPU or GPU as needed. We used powerful GPU instances for fast training and for tuning the hyperparameters of our models. The platform also offers distributed computation for parallelizing heavy workloads.
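Attaching a GPU compute target in the v1 SDK looks roughly like the following configuration sketch (the VM size, node counts, and cluster name are illustrative, and a live workspace is assumed):

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# NC-series VMs carry NVIDIA GPUs; size and node counts are illustrative.
config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_NC6", min_nodes=0, max_nodes=4)

gpu_cluster = ComputeTarget.create(ws, "gpu-cluster", config)
gpu_cluster.wait_for_completion(show_output=True)
```

With `min_nodes=0`, the cluster scales down to zero between runs, so GPU costs accrue only while a pipeline step is actually executing.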
For our style transfer algorithm, the CUDA dependency was strict, and it did not match the platform’s default Docker environment. Thankfully, Azure Machine Learning allows custom Docker containers in place of the defaults. This feature gives absolute freedom to recreate almost any configuration in AML Pipelines.
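Pinning a step to a custom image with an exact CUDA stack can be sketched as below; the registry and image name are hypothetical placeholders, and this fragment only builds a run configuration to hand to a pipeline step:

```python
from azureml.core import Environment
from azureml.core.runconfig import RunConfiguration

# Hypothetical registry/image; the point is pinning an exact CUDA stack.
env = Environment("style-transfer-env")
env.docker.base_image = "myregistry.azurecr.io/style-transfer:cuda-pinned"
env.python.user_managed_dependencies = True  # deps are baked into the image

run_config = RunConfiguration()
run_config.environment = env
# Pass run_config to a PythonScriptStep via its runconfig argument.
```

Setting `user_managed_dependencies = True` tells the platform not to layer its own Python environment on top of the image, which is what preserves a tightly coupled CUDA setup.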
Deploying Auria so people can experience her creative process is something we are currently working on. AML pipeline deployment saves the time we would otherwise spend building backend APIs: a published pipeline readily provides REST endpoints to its output, which can be consumed as convenient.
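Publishing and invoking a pipeline over REST can be sketched as follows, assuming `pipeline` is an assembled AML `Pipeline` object and an interactive login is available; the experiment name is illustrative:

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

published = pipeline.publish(
    name="auria", description="Auria's creative pipeline")

# A published pipeline exposes a REST endpoint; POSTing to it queues a run.
auth = InteractiveLoginAuthentication()
resp = requests.post(
    published.endpoint,
    headers=auth.get_authentication_header(),
    json={"ExperimentName": "auria"})
run_id = resp.json()["Id"]
```

A web frontend would hit this endpoint to trigger a fresh creation and then poll the run for the resulting artwork, with no custom backend API in between.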
Given the perks above, Auria is a perfect use case for Azure Machine Learning pipelines. In further collaboration with the Microsoft Azure ML team, we plan to scale Auria up: strengthening her creative pipeline with more advanced algorithms, deploying her online to create an interactive experience for her followers, and trying new varieties of artificially generated digital art.