Imagine that you’re working on a machine learning (ML) project and you’ve found your champion model. What happens next? For many, the project ends there, with their models sitting isolated in a Jupyter notebook. Others will take the initiative to convert their notebooks to scripts for somewhat production-grade code.
Both of these end states limit a project’s accessibility, requiring stakeholders to have knowledge of source code hosting sites like GitHub and Bitbucket. A better solution is to convert your project into a prototype with a frontend that can be deployed on internal servers.
While a prototype may not be production standard, it’s an effective technique companies use to provide stakeholders with insight into a proposed solution. This then allows the company to collect feedback and develop better iterations in the future.
To develop a prototype, you will need:
- A frontend for user interaction
- A backend that can process requests
Both requirements can take a significant amount of time to build, however. In this tutorial, you will learn how to rapidly build your own machine learning web application using Streamlit for your frontend and FastAPI for your microservice, simplifying the process. Learn more about microservices in Building a Machine Learning Microservice with FastAPI.
You can try the application featured in this tutorial using the code in the kurtispykes/car-evaluation-project GitHub repository.
Overview of Streamlit and FastAPI
Streamlit, an open-source app framework, aims to simplify the process of building web applications for machine learning and data science. It has been gaining a significant amount of traction in the applied ML community in recent years. Founded in 2018, Streamlit was born out of the frustrations of ex-Google engineers faced with the challenges experienced by practitioners when deploying machine learning models and dashboards.
Using the Streamlit framework, data scientists and machine learning practitioners can build their own predictive analytics web applications in a few hours. There is no need to depend on front-end engineers or knowledge of HTML, CSS, or JavaScript, since it’s all done in Python.
FastAPI has also risen rapidly to prominence among Python developers. It’s a modern web framework, also initially released in 2018, designed to compensate in almost all areas where Flask falls flat, making it possible to develop robust RESTful APIs in Python. One of the great things about switching to FastAPI is that the learning curve is not steep, especially if you already know Flask. With FastAPI you can expect thorough documentation, short development times, simple testing, and easy deployment.
By combining the power of the two frameworks, it’s possible to develop an exciting machine learning application you could share with your friends, colleagues, and stakeholders in less than a day.
Build a full-stack machine learning application
The following steps guide you through building a simple classification model using FastAPI and Streamlit. This model evaluates whether a car is acceptable based on the following six input features:
- buying: The cost to buy the car
- maint: The cost of maintenance
- doors: The number of doors
- persons: The carrying capacity (number of people)
- lug_boot: The size of the luggage boot
- safety: The estimated safety
You can download the full Car Evaluation dataset from the UCI machine learning repository.
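To make the "champion model" step concrete, here is a minimal, illustrative training sketch using scikit-learn. It is not the packaged model from the repository: the inline sample rows, the `OrdinalEncoder`/`DecisionTreeClassifier` choice, and the pipeline structure are all assumptions for demonstration; in practice you would train on the full UCI dataset and tune properly.

```python
# Illustrative only: ordinal-encode the six categorical features and fit a tree
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# A few inline rows in the UCI column order; the real dataset has 1,728 rows
rows = [
    ["vhigh", "vhigh", "2", "2", "small", "low", "unacc"],
    ["low", "low", "4", "4", "big", "high", "vgood"],
    ["med", "med", "3", "4", "med", "med", "acc"],
    ["high", "med", "4", "more", "big", "high", "good"],
]
X = [r[:-1] for r in rows]  # buying, maint, doors, persons, lug_boot, safety
y = [r[-1] for r in rows]   # class label

model = Pipeline([
    ("encode", OrdinalEncoder()),              # map category strings to integers
    ("clf", DecisionTreeClassifier(random_state=0)),
])
model.fit(X, y)

print(model.predict([["low", "low", "4", "4", "big", "high"]])[0])
```

The same six string-valued features the UI collects later in this tutorial flow straight into the pipeline, which is what makes the frontend-to-model handoff so simple.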
After you have done all of the data analysis, trained your champion model, and packaged the machine learning model, the next step is to create two dedicated services: 1) the FastAPI backend and 2) the Streamlit frontend. These two services can then be deployed in two Docker containers and orchestrated using Docker Compose.
Each service requires its own Dockerfile to assemble the Docker images. A Docker Compose YAML file is also required to define and share both container applications. The following steps work through the development of each service.
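The project layout assumed throughout the rest of this tutorial looks roughly like the following. Directory names are taken from the repository; the exact file contents may differ.

```
packages/
├── docker-compose.yml
├── car_evaluation_api/
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── run.sh
│   └── app/
│       └── main.py
└── car_evaluation_streamlit/
    ├── Dockerfile
    ├── requirements.txt
    └── app.py
```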
The user interface
In the car_evaluation_streamlit
package, create a simple user-interface in the app.py
file using Streamlit. The code below includes:
- A title for the UI
- A short description of the project
- Six interactive elements the user will use to input information about a car
- Class values returned by the API
- A submit button that, when clicked, will send all data collected from the user to the machine learning API service as a post request and then display the response from the model
import requests
import streamlit as st

# Define the title
st.title("Car evaluation web application")
st.write(
    "The model evaluates a car's acceptability based on the inputs below. "
    "Pass the appropriate details about your car using the questions below "
    "to discover if your car is acceptable."
)

# Input 1
buying = st.radio(
    "What are your thoughts on the car's buying price?",
    ("vhigh", "high", "med", "low")
)

# Input 2
maint = st.radio(
    "What are your thoughts on the price of maintenance for the car?",
    ("vhigh", "high", "med", "low")
)

# Input 3
doors = st.select_slider(
    "How many doors does the car have?",
    options=["2", "3", "4", "5more"]
)

# Input 4
persons = st.select_slider(
    "How many passengers can the car carry?",
    options=["2", "4", "more"]
)

# Input 5
lug_boot = st.select_slider(
    "What is the size of the luggage boot?",
    options=["small", "med", "big"]
)

# Input 6
safety = st.select_slider(
    "What estimated level of safety does the car provide?",
    options=["low", "med", "high"]
)

# Class values to be returned by the model
class_values = {
    0: "unacceptable",
    1: "acceptable",
    2: "good",
    3: "very good"
}

# When 'Submit' is selected
if st.button("Submit"):
    # Inputs to ML model
    inputs = {
        "inputs": [
            {
                "buying": buying,
                "maint": maint,
                "doors": doors,
                "persons": persons,
                "lug_boot": lug_boot,
                "safety": safety
            }
        ]
    }

    # Post inputs to the ML API
    response = requests.post("http://host.docker.internal:8001/api/v1/predict/", json=inputs)
    json_response = response.json()
    prediction = class_values[json_response.get("predictions")[0]]
    st.subheader(f"This car is **{prediction}!**")
The only framework required for this service is Streamlit. In the requirements.txt
file, pin the version of Streamlit to install when creating the Docker image:
streamlit>=1.12.0
Now, add the Dockerfile to create the Docker image for this service:
FROM python:3.9.4
WORKDIR /opt/car_evaluation_streamlit
ADD ./car_evaluation_streamlit /opt/car_evaluation_streamlit
RUN pip install --upgrade pip
RUN pip install -r /opt/car_evaluation_streamlit/requirements.txt
EXPOSE 8501
CMD ["streamlit", "run", "app.py"]
Each instruction in the Dockerfile creates a layer, and those layers are cached and stacked to form the final image.
The REST API
A REST API (representational state transfer application programming interface) follows a software architectural style that enables two applications to communicate with one another. In technical terms, a REST API transfers the state of a requested resource to the client. In this scenario, the requested resource is a prediction from the machine learning model.
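Concretely, the contract between the two services can be sketched as follows. The request shape matches what the Streamlit frontend sends; the response fields shown (predictions, errors) are assumptions inferred from how the frontend reads the response, not the repository's exact schema.

```python
import json

# The payload the Streamlit frontend sends to POST /api/v1/predict/
payload = {
    "inputs": [
        {
            "buying": "low",
            "maint": "low",
            "doors": "4",
            "persons": "4",
            "lug_boot": "big",
            "safety": "high",
        }
    ]
}

# A response shaped like the one the frontend expects: it reads
# json_response.get("predictions")[0] and maps it to a class label
response_body = json.loads('{"errors": null, "predictions": [3]}')

class_values = {0: "unacceptable", 1: "acceptable", 2: "good", 3: "very good"}
print(class_values[response_body["predictions"][0]])  # very good
```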
The API built with FastAPI can be found in the car_evaluation_api
package. Locate the app/main.py
file, which is used to run the application. For more information about how the API was developed, see Building a Machine Learning microservice with FastAPI.
from typing import Any

from fastapi import APIRouter, FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse
from loguru import logger

from app.api import api_router
from app.config import settings, setup_app_logging

# setup logging as early as possible
setup_app_logging(config=settings)

app = FastAPI(
    title=settings.PROJECT_NAME, openapi_url=f"{settings.API_V1_STR}/openapi.json"
)

root_router = APIRouter()


@root_router.get("/")
def index(request: Request) -> Any:
    """Basic HTML response."""
    body = (
        "<html>"
        "<body>"
        "<h1>Welcome to the API</h1>"
        "<div>"
        "Check the docs: <a href='/docs'>here</a>"
        "</div>"
        "</body>"
        "</html>"
    )
    return HTMLResponse(content=body)


app.include_router(api_router, prefix=settings.API_V1_STR)
app.include_router(root_router)

# Set all CORS enabled origins
if settings.BACKEND_CORS_ORIGINS:
    app.add_middleware(
        CORSMiddleware,
        allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

if __name__ == "__main__":
    # Use this for debugging purposes only
    logger.warning("Running in development mode. Do not run like this in production.")
    import uvicorn

    uvicorn.run(app, host="localhost", port=8001, log_level="debug")
The code above defines the server, which includes three endpoints:
"/"
: An endpoint used to define a body that returns an HTML response"/health"
: An endpoint to return the health response schema of the model"/predict"
: An endpoint used to serve predictions from the trained model
You may only see the "/"
endpoint in the code above: this is because the "/health"
and "/predict"
endpoints were imported from the API module and added to the application router.
Next, save the dependencies for the API service in the requirements.txt
file:
--extra-index-url="https://repo.fury.io/kurtispykes/"
car-evaluation-model==1.0.0
uvicorn>=0.18.2
fastapi>=0.79.0
python-multipart>=0.0.5
pydantic>=1.9.1
typing_extensions>=3.10.0
loguru>=0.6.0
Note: An extra index was added to pip to install the packaged model from Gemfury.
Next, add the Dockerfile to the car_evaluation_api
package.
FROM python:3.9.4
# Create the user that will run the app
RUN adduser --disabled-password --gecos '' ml-api-user
WORKDIR /opt/car_evaluation_api
ARG PIP_EXTRA_INDEX_URL
# Install requirements, including from Gemfury
ADD ./car_evaluation_api /opt/car_evaluation_api
RUN pip install --upgrade pip
RUN pip install -r /opt/car_evaluation_api/requirements.txt
RUN chmod +x /opt/car_evaluation_api/run.sh
RUN chown -R ml-api-user:ml-api-user ./
USER ml-api-user
EXPOSE 8001
CMD ["bash", "./run.sh"]
Both services have been created, as well as the instructions to build the containers for each service.
The next step is to wire the containers together so you can start using your machine learning application. Before proceeding, make sure you have Docker and Docker Compose installed. Reference the Docker Compose installation guide if necessary.
Wire the Docker containers
To wire the containers together, locate the docker-compose.yml
file in the packages/
directory.
The contents of the Docker Compose file are provided below:
version: '3'

services:
  car_evaluation_streamlit:
    build:
      context: .
      dockerfile: car_evaluation_streamlit/Dockerfile
    ports:
      - 8501:8501
    depends_on:
      - car_evaluation_api

  car_evaluation_api:
    build:
      context: .
      dockerfile: car_evaluation_api/Dockerfile
    ports:
      - 8001:8001
This file specifies the Compose file format version, defines the two services to be wired together, the ports to expose, and the paths to their respective Dockerfiles. Note that the car_evaluation_streamlit
service informs Docker Compose that it depends on the car_evaluation_api
service.
To test the application, navigate to the project root from your command prompt (the location of the docker-compose.yml
file). Then run the following command to build the images and spin up both containers:
docker-compose up -d --build
It may take a minute or two to build the images. Once the Docker images are built, you can navigate to http://localhost:8501 to use the application.
Figure 1 shows the six model inputs outlined at the beginning of this post:
- The car buying price (low, medium, high, very high)
- The car’s maintenance costs (low, medium, high, very high)
- The number of doors the car has (2, 3, 4, 5+)
- The number of passengers the car can carry (2, 4, more)
- The size of the luggage boot (small, medium, big)
- The expected safety of the car (low, medium, high)
Summary
Congratulations—you have just created your own full-stack machine learning web application. The next steps may involve deploying the application on the web using services such as Heroku Cloud, Google App Engine, or Amazon EC2.
Streamlit enables developers to rapidly build aesthetically pleasing user interfaces for data science and machine learning. A working knowledge of Python is all that is required to get started with Streamlit. FastAPI is a modern web framework designed to compensate in most areas where Flask falls flat. You can use Streamlit and FastAPI backend together to build a full-stack web application with Docker and Docker Compose.