Mihail Eric

Fake News Detection From Ideation to Deployment: Model Deployment and Continuous Integration

Updated: Jun 2, 2022



In this post, we will continue where our previous post left us and look at deploying our model and setting up a continuous integration system. This will allow us to constantly update, improve, and test our code.


As a reminder, our goal is to apply a data-driven solution to the problem of fake news detection, taking it from initial setup through to deployment. This post covers the final phase of the series:


5. Deploying the model and connecting a continuous integration solution (this post!)


This article will focus on deploying our model, including building a Chrome extension that can make calls to a REST API.


Afterwards, we will discuss how to set up continuous integration so that we can constantly update, test, and deploy the latest version of our project.


Full source code is here.


Setting Up a Prediction REST API


As we mentioned in the first post, our goal in this project was to build a model that we could deploy as a Chrome web extension.


This requires the model to make predictions in real time, necessitating an online inference solution.


There are two possible solutions at this point:

1. Run the full model client-side (i.e. in the browser).

2. Run the model server-side and make REST API calls to the server.


With regards to (1), while there has been some work on running full models client-side, the ecosystem is not yet mature enough to make this the easiest solution.


In the case of Scikit-learn models, the path forward for (1) involves compiling Python code to WebAssembly and then running it with third-party libraries.


We will opt, instead, for the more standard solution of building a REST API that allows us to interact with a model running server-side.


To do that, we will leverage the super slick, modern web framework FastAPI. Using FastAPI, the core of our REST API looks like this:
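Here is a minimal sketch of what that core can look like, assuming a Scikit-learn pipeline saved to a file like model.pkl and a simple request/response schema (the endpoint path and the `Prediction` object come from the discussion below; the file name, request model, and label handling are illustrative assumptions):

```python
# main.py -- minimal sketch of the prediction server.
# The model file name, request schema, and label handling are assumptions.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the trained Scikit-learn pipeline (vectorizer + classifier) once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class Statement(BaseModel):
    text: str


class Prediction(BaseModel):
    label: str
    probability: float


@app.post("/api/predict-fakeness", response_model=Prediction)
def predict_fakeness(statement: Statement) -> Prediction:
    # Assumes the pipeline handles featurization of the raw text internally.
    probs = model.predict_proba([statement.text])[0]
    idx = int(probs.argmax())
    # Assumes the model was trained with string labels such as "FAKE"/"REAL".
    return Prediction(label=str(model.classes_[idx]), probability=float(probs[idx]))
```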


Not too shabby at all.


This defines a single REST endpoint called /api/predict-fakeness that ingests a textual statement, runs inference on an appropriately defined datapoint, and outputs a `Prediction` response object.


We can then run the server locally with the following command:
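Assuming the FastAPI app object lives in a module called main.py (an illustrative name), the standard uvicorn invocation is something like:

```bash
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```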


Building a Chrome Extension


Now that we have our server running, we will create a Chrome extension that can make calls to our API.


The goal of our extension will be to allow a user browsing the Internet to highlight some segment of text (something like a news headline) and have the extension indicate whether the text is FAKE or REAL.


For a good overview of how to build an AI-powered Chrome extension, check out this post. For our purposes, the full extension code is here.


The core components are the content.js script:
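Here is a rough sketch of what such a content script can look like; the API URL, the trigger (listening for mouseup on a selection), and the alert-based display are assumptions for illustration, so refer to the linked extension code for the real implementation:

```javascript
// content.js -- sketch: send the highlighted text to the prediction API
// and surface the FAKE/REAL label. URL and display details are assumptions.
document.addEventListener("mouseup", async () => {
  const selection = window.getSelection().toString().trim();
  if (!selection) return;

  const response = await fetch("http://localhost:8000/api/predict-fakeness", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: selection }),
  });
  const prediction = await response.json();

  // Minimal display: pop up the predicted label for the highlighted statement.
  alert(`This statement looks ${prediction.label}`);
});
```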


and the manifest.json, which defines the actual extension:
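A bare-bones Manifest V2 file along these lines would wire up the content script (the name, version, and permissions here are illustrative):

```json
{
  "manifest_version": 2,
  "name": "Fake News Detector",
  "version": "0.1",
  "description": "Highlight a statement and predict whether it is FAKE or REAL.",
  "permissions": ["activeTab", "http://localhost:8000/*"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```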


When our extension is running live, we can test on a collection of news headlines to get something like this:



We are now officially running a fake news detecting browser extension that leverages a machine learning model. Super cool!


You'll notice the model isn't perfect by any means and we shouldn't expect it to be.


As we discussed before, the dataset we trained on wasn't particularly big, and it's not clear it represented all the phenomena we wanted a good fake news dataset to capture.


In addition, it's not clear that the data we are seeing at inference time is consistent with the data the model was trained on. This is related to a common issue in building machine learning applications called concept drift.


Moreover, our model doesn't use features we would expect to help, like past relevant statements made by the speaker.


In fact, we don't even have a way of detecting the actual speaker with the extension in its current format!


All we have to go off of is the text of the headline, which, as we saw before, didn't provide the most salient features to the model.


To really close the user feedback loop on our live model, we would want to improve the extension by allowing the user to indicate whether a prediction was GOOD or BAD.


We would probably frame the question in the popup as "Was this helpful?"


By doing this, we would literally have our users annotate live data, thereby improving the dataset we use to train our model. This would initiate a powerful data flywheel!


This is left as an exercise to the reader.


Continuous Integration


We will now discuss continuous integration in the context of machine learning projects. First off, what is continuous integration?


Continuous integration (or CI) is a broader software engineering concept that refers to the practice of frequently and automatically integrating code changes from multiple contributors into a shared, centralized codebase.


This typically involves setting up an environment and tooling where code changes can be easily tracked, tested, and validated.


In a nutshell, CI is about scalable software engineering, enabling teams to collaborate on projects in a reproducible, understandable, and rapid fashion.


Many of the CI techniques applied to traditional software engineering projects apply to machine learning projects as well. For example, the fact that we are hosting our project in a shared GitHub repository is already an important component of CI.


This allows multiple individuals to contribute to our codebase by submitting feature changes through pull requests.


These pull requests can run against a shared suite of tests and be reviewed by other team members for functionality/style.


If the pull request passes the test suite and is approved by other team members, then it can be merged into the project's master branch.


Having a robust test suite is a crucial component of such a CI system. We already started building out functionality tests. We will now take the next step of making this available to a CI workflow.


More specifically, we will make it so that every time a contributor pushes to a remote branch, the functionality tests are run against that contributor's state of the codebase.


To do that, we will leverage GitHub Actions. Note that you could use third-party tools like Travis CI, but we will use GitHub's native features for convenience.


We will define the following action:
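A sketch of such a workflow file (e.g. .github/workflows/tests.yml) is below; the custom-docker-image placeholder and the pytest invocation are assumptions you would swap for your own image and test command:

```yaml
# .github/workflows/tests.yml -- sketch of the CI action.
name: functionality-tests

on: push

jobs:
  run-tests:
    runs-on: ubuntu-latest
    # Run the job inside our custom Docker image, which bundles the project
    # data and Python dependencies. Replace custom-docker-image with yours.
    container:
      image: custom-docker-image
    steps:
      - uses: actions/checkout@v2
      - name: Run functionality tests
        run: pytest
```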



This action pulls a custom Docker image (custom-docker-image above, which needs to be provided), does a little bit of environment setup, and then executes our functionality tests.


Our custom Docker image is built via the following Dockerfile:
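Something along these lines (the Python base image tag and the requirements.txt file name are assumptions):

```dockerfile
# Dockerfile for the CI image: bundle the project (data included, so the
# Great Expectations tests can see it) along with its Python dependencies.
FROM python:3.8-slim

WORKDIR /app

# Add all of our project files, including the data directory.
COPY . /app

# Install the relevant Python dependencies.
RUN pip install --no-cache-dir -r requirements.txt
```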



Simple enough. Add all of our project files and install the relevant Python dependencies.


We define our own Docker image because it allows us to provide our data to GitHub Actions for the Great Expectations tests.


Admittedly, this isn't the most robust solution (what happens if our data grows beyond 10K datapoints?).


For a more robust solution that doesn't involve baking the data into the image, check out Docker volumes.


Our GitHub Action above will run every time a contributor pushes a commit to the repo.


We can be even more specific and only trigger the action if there is a push made to the master branch.
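In the workflow file, that amounts to narrowing the trigger, roughly:

```yaml
on:
  push:
    branches: [ master ]
```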


One additional piece of setup for the CI is to make it so that if our action doesn't pass (i.e. one of our tests fails), we aren't able to merge the pull request.


In GitHub, that can be enabled in the *Settings* of your repo:



Nice! So now we can rest assured that if someone commits something to master, the code has passed some suite of functionality tests.


This doesn't guarantee that our code is correct, but at least it provides an initial gating mechanism.


An important component of having functional CI for your project is ensuring that the project behavior is reproducible.


In the case of a machine learning application, it should be very easy for a new contributor to the project to retrain any model.


To achieve that, we leverage DVC. DVC enables us to do a few things:


1. Version control our data using a Git-like interface.

2. Define workflows for various steps of model creation such as preprocessing, featurization, and training.


First off, we can track our raw dataset files (e.g. data/raw/train2.tsv) by running DVC's `add` command.
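For the example file above, that is:

```bash
dvc add data/raw/train2.tsv
```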



This will create a corresponding train2.tsv.dvc file as well as a .gitignore which will prevent us from accidentally committing our (potentially large) train2.tsv data file.


We can, however, freely commit the train2.tsv.dvc file to our repository (and we should!). When a newcomer uses our project, they will have this file, and by running `dvc pull` they can acquire the dataset from a remote storage system we set up with DVC.


Another very powerful feature of DVC is the ability to define version-controllable workflows, called pipelines.


For example, we can define a pipeline for normalizing and cleaning our data, a pipeline for training our model, etc.


This is done by creating stages in a dvc.yaml file. It will look something like this:
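Here is a sketch of such a dvc.yaml; the script names, paths, and outputs are illustrative assumptions, while the stage names mirror the ones discussed in this post:

```yaml
# dvc.yaml -- sketch of the pipeline stages (script names and paths are assumptions).
stages:
  normalize-data:
    cmd: python src/preprocess.py data/raw/train2.tsv data/processed/train.tsv
    deps:
      - src/preprocess.py
      - data/raw/train2.tsv
    outs:
      - data/processed/train.tsv

  train-random-forest:
    cmd: python src/train.py data/processed/train.tsv models/random_forest.pkl
    deps:
      - src/train.py
      - data/processed/train.tsv
    outs:
      - models/random_forest.pkl
```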



As you can see, the stages define their dependencies, the command to be executed, and the outputs of that command.


DVC is smart about detecting when a stage should be re-run because a dependency has changed, and it also tracks the outputs of your pipelines.


When it comes to reproducing a stage like data normalization/cleaning, it's as simple as running:
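Using the (assumed) stage name from the sketch above:

```bash
dvc repro normalize-data
```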



One additional cool detail: if we define our dependencies and outputs carefully, we get a fully-formed pipeline execution graph (a DAG, rather).


Therefore, if we run the train-random-forest stage, DVC can detect whether a stage or dependency earlier in the DAG has changed and re-run it as necessary before executing our stage of interest. Very cool!


For full instructions on how to set up a stage, check out this page.


One final point of discussion is about making our deployment process easy to reproduce. Here we are really starting to talk about continuous deployment, a close cousin to CI.


Until now, we have deployed our model locally and interacted with our server in that fashion.


If we want to scale up our application, however (what happens if we have 1000+ users?), we need to find a way to deploy our model on a remote server through a cloud provider that can autoscale with our usage.


We won't go the full way of setting up an autoscaling solution with a cloud-based web application. However, we will describe an important initial step which is creating a Docker image with our model that can be easily run on a virtual machine.


Again, we will define an appropriate Dockerfile as follows:
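A sketch of that Dockerfile, assuming the widely used tiangolo/uvicorn-gunicorn-fastapi base image and illustrative file names for the server module and model checkpoint:

```dockerfile
# Dockerfile for serving the model. Base image tag and file names are assumptions.
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8

# The base image serves /app/main.py (exposing an `app` object) on port 80,
# so we copy in our FastAPI module and embed the model checkpoint next to it.
COPY ./main.py /app/main.py
COPY ./model.pkl /app/model.pkl
```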



This image builds off the FastAPI web application base image and simply embeds our model checkpoint into it.


Now we can execute our containerized application as follows:
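For example (the image tag is arbitrary, and the base image listens on port 80):

```bash
docker build -t fake-news-api .
docker run -d -p 80:80 fake-news-api
```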



With this setup in place, we can easily deploy our model on any remote server that supports Docker. This is where we transition to really building scalable machine learning-based applications.


We could go a step further and create an image for handling training workflows, which would enable us to easily scale up training jobs on remote servers, but that is left as an exercise to the reader.


And with that we have completed our whirlwind tour through building a complete machine learning project from scratch.


As a recap, we've touched on a number of different concepts in these posts:


1. Defining your problem

2. Exploring and understanding your data

3. Using data insights to build initial models

4. Analyzing your model behavior and errors

5. Iterating on new models

6. Deploying our models so that we can get real user behavior data

7. Making our model development process scalable and robust


We've come a long way!


There's still plenty more to do here, but hopefully this series has given you a snapshot of all the moving parts we need to get right for building a machine learning-powered application.


If you have any questions, don't hesitate to reach out.


Reproduced with permission from this post.
