⚙️ Bert Inner Workings
Let’s look at how an input flows through Bert.
Disclaimer: The format of this tutorial notebook is very similar to my other tutorial notebooks. This is done intentionally in order to keep readers familiar with my format.
Main idea:
I created this notebook to better understand the inner workings of Bert. I followed a lot of tutorials to try to understand the architecture, but I was never able to really understand what was happening under the hood. For me it always helps to see the actual code instead of just simple abstract diagrams that often don’t match the actual implementation. If you’re like me, then this tutorial will help!
I went as deep as you can go with Deep Learning — all the way to the tensor level. For me it helps to see the code and how the tensors move between layers. I feel like this level of abstraction is close enough to the core of the model to perfectly understand the inner workings.
I will use the implementation of Bert from one of the best NLP libraries out there: HuggingFace Transformers. More specifically, I will show the inner workings of Bert For Sequence Classification.
The term forward pass is used in Neural Networks and refers to the calculations involved from the input sequence all the way to the output of the last layer. It’s basically the flow of data from input to output.
I will follow the code from an example input sequence all the way to the final output prediction.
What should I know for this notebook?
Some prior knowledge of Bert is needed. I won’t go into any details of how Bert works. For this there is plenty of information out there.
Since I am using the PyTorch implementation of Bert, any knowledge of PyTorch is very useful.
Knowing a little bit about the transformers library helps too.
How deep are we going?
I think the best way to understand a model as complex as Bert is to see the actual layer components that are used. I will dig into the code until I reach the actual PyTorch layers used: torch.nn. In my opinion, there is no need to go deeper than the torch.nn layers.
Tutorial Structure
Each section contains multiple subsections.
The order of each section matches the order of the model’s layers from input to output.
At the beginning of each section of code I created a diagram to illustrate the flow of tensors through that particular piece of code.
I created the diagrams following the model’s implementation.
The major section Bert For Sequence Classification starts with the Class Call that shows how we normally create the Bert model for sequence classification and perform a forward pass. Class Components contains the components of the BertForSequenceClassification implementation.
At the end of each major section, I assemble all components from that section and show the output and diagram.
At the end of the notebook, I have all the code parts and diagrams assembled.
Terminology
I will use regular deep learning terminology found in most Bert tutorials. I’m using some terms in a slightly different way:
- Layer and layers: In this tutorial, when I mention layer it can be an abstraction of a group of layers or just a single layer. When I reach torch.nn, you know I refer to a single layer.
- torch.nn: I'm referring to any PyTorch layer module. This is the deepest I will go in this tutorial.
How to use this notebook?
The purpose of this notebook is purely educational. It is meant to align what you already know about how Bert works with the code implementation of Bert. I used the Bert implementation from Transformers. My contribution is arranging the code implementation and creating the associated diagrams.
Dataset
For simplicity, I will only use two sentences as our data input: I love cats! and He hates pineapple pizza. I'll pretend to do binary sentiment classification on these two sentences.
Coding
Now let’s do some coding! We will go through each code cell in the notebook, describe what it does, show the code, and, when relevant, show the output.
I made this format to be easy to follow if you decide to run each code cell in your own python notebook.
When I learn from a tutorial, I always try to replicate the results. I believe it’s easy to follow along if you have the code next to the explanations.
Installs
- The transformers library needs to be installed to use all the awesome code from Hugging Face. To get the latest version, I will install it straight from GitHub, as shown below.
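Installing straight from GitHub in a notebook cell usually looks something like this; the exact command used in the original notebook may differ:

```python
# Install the latest transformers straight from GitHub (notebook cell).
!pip install -q git+https://github.com/huggingface/transformers
```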
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
|████████████████████████████████| 2.9MB 6.7MB/s
|████████████████████████████████| 890kB 48.9MB/s
|████████████████████████████████| 1.1MB 49.0MB/s
Building wheel for transformers (PEP 517) ... done
Building wheel for sacremoses (setup.py) ... done
|████████████████████████████████| 71kB 5.2MB/s
Imports
Import all needed libraries for this notebook.
Declare parameters used for this notebook (a minimal sketch of the imports and parameter cell follows the list):
- set_seed(123) - Always good to set a fixed seed for reproducibility.
- n_labels - How many labels we are using in this dataset. This is used to decide the size of the classification head.
- ACT2FN - Dictionary of special activation functions used in Bert. We'll only need the gelu activation function.
- BertLayerNorm - Shortcut for calling the PyTorch normalization layer torch.nn.LayerNorm.
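A minimal sketch of what this cell might look like; the exact import paths can vary between transformers versions, and everything beyond the seed and the number of labels is an assumption:

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertTokenizer, BertForSequenceClassification, set_seed
from transformers.activations import ACT2FN  # dictionary of activation functions (we only need 'gelu')

set_seed(123)                 # fixed seed for reproducibility
n_labels = 2                  # two labels: negative (0) and positive (1)
BertLayerNorm = nn.LayerNorm  # shortcut for the PyTorch normalization layer
```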
Define Input
Let’s define the text data that we will use Bert to classify as positive or negative.
We encoded our positive and negative sentiments into:
- 0 — for negative sentiments.
- 1 — for positive sentiments.
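A minimal sketch of that input cell; the variable names texts and labels are assumed here for illustration:

```python
# Two example sentences and their sentiment labels.
texts = ['I love cats!', 'He hates pineapple pizza.']

# 0 - negative sentiment, 1 - positive sentiment.
labels = [1, 0]
```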
Bert Tokenizer
Creating the tokenizer is pretty standard when using the Transformers library. We'll use our newly created tokenizer on our two-sentence dataset to create the input_sequences that will be used as input for our Bert model, as shown below.
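A minimal sketch of the tokenization step, assuming the bert-base-cased checkpoint (which matches the cased token ids shown below) and the texts and labels variables from the previous sketch:

```python
# Create the tokenizer from the pretrained checkpoint.
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

# Tokenize both sentences, pad them to the same length and return PyTorch tensors.
input_sequences = tokenizer(text=texts, padding=True, return_tensors='pt')

# Attach the labels so the whole batch lives in one dictionary.
input_sequences.update({'labels': torch.tensor(labels)})
```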
Show Bert Tokenizer Diagram
Downloading: 100% |████████████████████████████████| 213k/213k [00:00<00:00, 278kB/s]
PRETTY PRINT OF `input_sequences` UPDATED WITH `labels`:
input_ids : tensor([[ 101, 146, 1567, 11771, 106, 102, 0, 0, 0],
[ 101, 1124, 18457, 10194, 11478, 7136, 13473, 119, 102]])
token_type_ids : tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]])
attention_mask : tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1]])
labels : tensor([1, 0])
ORIGINAL TEXT:
I love cats!
He hates pineapple pizza.
TEXT AFTER USING `BertTokenizer`:
[CLS] I love cats! [SEP] [PAD] [PAD] [PAD]
[CLS] He hates pineapple pizza. [SEP]
Bert Configuration
These are values specific to the Bert architecture, predefined for us by Hugging Face.
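A minimal sketch of loading the configuration, assuming the bert-base-cased checkpoint and the n_labels parameter declared earlier:

```python
# Load the predefined Bert configuration and set the number of classification labels.
bert_configuration = BertConfig.from_pretrained('bert-base-cased', num_labels=n_labels)

print('NUMBER OF LAYERS:', bert_configuration.num_hidden_layers)
print('EMBEDDING SIZE:', bert_configuration.hidden_size)
print('ACTIVATIONS:', bert_configuration.hidden_act)
```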
Downloading: 100% |████████████████████████████████| 433/433 [00:00<00:00, 15.5kB/s]
NUMBER OF LAYERS: 12
EMBEDDING SIZE: 768
ACTIVATIONS: gelu
Bert For Sequence Classification
I will go over the Bert for Sequence Classification model. This is a Bert language model with a classification layer on top.
If you plan on looking at other transformer models, this tutorial will be very similar.
Class Call
Let’s start by doing a forward pass using the whole model call from Hugging Face Transformers.
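A minimal sketch of that call, assuming the bert-base-cased checkpoint and the bert_configuration and input_sequences created earlier; since the classification head is randomly initialized, the exact loss and logits values below will differ from run to run:

```python
# Create the Bert model with a sequence classification head on top.
model = BertForSequenceClassification.from_pretrained('bert-base-cased', config=bert_configuration)
model.eval()

# Forward pass: feed the tokenized batch (including labels, so a loss is returned).
with torch.no_grad():
    output = model(**input_sequences)

print('FORWARD PASS OUTPUT:', output)
```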
Downloading: 100% |████████████████████████████████| 436M/436M [00:07<00:00, 61.3MB/s]
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ...
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
FORWARD PASS OUTPUT: SequenceClassifierOutput(loss=tensor(0.7454), logits=tensor([[ 0.2661, -0.1774],
[ 0.2223, -0.0847]]), hidden_states=None, attentions=None)
Class Components
Now let’s look at the code implementation and break down each part of the model and check the outputs.
Start with the BertForSequenceClassification class, found in transformers/src/transformers/models/bert/modeling_bert.py#L1449.
The forward pass uses the following layers (a simplified sketch of how they fit together follows the list):
- BertModel layer:
self.bert = BertModel(config)
- torch.nn.Dropout layer for dropout:
self.dropout = nn.Dropout(config.hidden_dropout_prob)
- torch.nn.Linear layer used for classification:
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
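A simplified sketch of how these three layers are wired together, paraphrased from the Transformers implementation with optional arguments and error handling stripped out:

```python
# Simplified forward pass of BertForSequenceClassification (sketch, not the full implementation).
def sequence_classification_forward(self, input_ids, attention_mask, token_type_ids, labels=None):
    # 1. Run the core Bert model and use the pooled [CLS] representation.
    outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
    pooled_output = outputs[1]                    # shape: [batch_size, hidden_size]

    # 2. Regularize and project to the label space.
    pooled_output = self.dropout(pooled_output)
    logits = self.classifier(pooled_output)       # shape: [batch_size, num_labels]

    # 3. Compute a cross-entropy loss when labels are provided.
    loss = None
    if labels is not None:
        loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
    return loss, logits
```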
BertModel
This is the core Bert model that can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L815.
Hugging Face was nice enough to mention a small summary: The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
The forward pass uses the following layers (a simplified sketch follows the list):
- BertEmbeddings layer:
self.embeddings = BertEmbeddings(config)
- BertEncoder layer:
self.encoder = BertEncoder(config)
- BertPooler layer:
self.pooler = BertPooler(config)
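A simplified sketch of how these three layers are chained inside BertModel's forward pass; the extended attention mask handling and optional arguments are omitted:

```python
# Simplified forward pass of BertModel (sketch).
def bert_model_forward(self, input_ids, attention_mask, token_type_ids):
    # 1. Turn token ids into embeddings: word + position + token type.
    embedding_output = self.embeddings(input_ids=input_ids, token_type_ids=token_type_ids)

    # 2. Run the stack of 12 transformer layers (self-attention + feed-forward).
    encoder_outputs = self.encoder(embedding_output)    # the real code also passes the attention mask
    sequence_output = encoder_outputs[0]                # shape: [batch_size, seq_len, hidden_size]

    # 3. Pool the first token ([CLS]) into a fixed-size sentence representation.
    pooled_output = self.pooler(sequence_output)        # shape: [batch_size, hidden_size]
    return sequence_output, pooled_output
```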
Bert Embeddings
This is where we feed the input_sequences created under Bert Tokenizer and get our first embeddings.
Implementation can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L165.
This layer contains actual PyTorch layers. I won’t go into further detail since this is as far as we need to go.
The forward pass uses the following layers (a simplified sketch follows the list):
- torch.nn.Embedding layer for word embeddings:
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
- torch.nn.Embedding layer for position embeddings:
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
- torch.nn.Embedding for token type embeddings:
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
- torch.nn.LayerNorm layer for normalization:
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- torch.nn.Dropout layer for dropout:
self.dropout = nn.Dropout(config.hidden_dropout_prob)
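A simplified sketch of the BertEmbeddings forward pass that produces the tensor shapes printed below:

```python
# Simplified forward pass of BertEmbeddings (sketch).
def bert_embeddings_forward(self, input_ids, token_type_ids):
    seq_length = input_ids.size(1)                                      # 9 in our example
    position_ids = torch.arange(seq_length).unsqueeze(0)                # [1, 9]

    words_embeddings = self.word_embeddings(input_ids)                  # [2, 9, 768]
    position_embeddings = self.position_embeddings(position_ids)        # [1, 9, 768], broadcast over batch
    token_type_embeddings = self.token_type_embeddings(token_type_ids)  # [2, 9, 768]

    # Sum the three embeddings, then normalize and apply dropout.
    embeddings = words_embeddings + position_embeddings + token_type_embeddings
    embeddings = self.LayerNorm(embeddings)                             # [2, 9, 768]
    embeddings = self.dropout(embeddings)                               # [2, 9, 768]
    return embeddings
```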
Created Tokens Positions IDs:
tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])
Tokens IDs:
torch.Size([2, 9])
Tokens Type IDs:
torch.Size([2, 9])
Word Embeddings:
torch.Size([2, 9, 768])
Position Embeddings:
torch.Size([1, 9, 768])
Token Types Embeddings:
torch.Size([2, 9, 768])
Sum Up All Embeddings:
torch.Size([2, 9, 768])
Embeddings Layer Normalization:
torch.Size([2, 9, 768])
Embeddings Dropout Layer:
torch.Size([2, 9, 768])
Bert Encoder
This layer contains the core of the Bert model, where the self-attention happens.
The implementation can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L512.
The forward pass uses (a simplified sketch follows):
- 12 BertLayer layers (in this setup config.num_hidden_layers=12):
self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)])
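A simplified sketch of how the encoder loops over its 12 layers, each one feeding the next:

```python
# Simplified forward pass of BertEncoder (sketch).
def bert_encoder_forward(self, hidden_states, attention_mask=None):
    for layer_module in self.layer:                        # 12 BertLayer modules
        hidden_states = layer_module(hidden_states, attention_mask)[0]
    return (hidden_states,)                                # [2, 9, 768] in our example
```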
BertLayer
This layer contains the basic components of a full transformer block: the self-attention sublayer followed by the feed-forward sublayer.
Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L429.
The forward pass uses (a simplified sketch follows the list):
- BertAttention layer:
self.attention = BertAttention(config)
- BertIntermediate layer:
self.intermediate = BertIntermediate(config)
- BertOutput layer:
self.output = BertOutput(config)
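A simplified sketch of one BertLayer forward pass: self-attention, then the feed-forward expansion and projection back:

```python
# Simplified forward pass of BertLayer (sketch).
def bert_layer_forward(self, hidden_states, attention_mask=None):
    # 1. Self-attention sublayer (BertSelfAttention + BertSelfOutput).
    attention_output = self.attention(hidden_states, attention_mask)[0]   # [2, 9, 768]

    # 2. Feed-forward sublayer: expand to 3072 with gelu...
    intermediate_output = self.intermediate(attention_output)             # [2, 9, 3072]

    # 3. ...then project back to 768 with a residual connection and LayerNorm.
    layer_output = self.output(intermediate_output, attention_output)     # [2, 9, 768]
    return (layer_output,)
```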
Bert Attention
This layer contains basic components of the self-attention implementation.
Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L351.
The forward pass uses (a simplified sketch follows the list):
- BertSelfAttention layer:
self.self = BertSelfAttention(config)
- BertSelfOutput layer:
self.output = BertSelfOutput(config)
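A simplified sketch of how the two sublayers are combined; note that BertSelfOutput also receives the original hidden states for the residual connection:

```python
# Simplified forward pass of BertAttention (sketch).
def bert_attention_forward(self, hidden_states, attention_mask=None):
    self_outputs = self.self(hidden_states, attention_mask)         # multi-head self-attention
    attention_output = self.output(self_outputs[0], hidden_states)  # dense + dropout + residual + LayerNorm
    return (attention_output,)                                      # [2, 9, 768]
```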
BertSelfAttention
This layer contains the basic torch.nn components of the self-attention implementation.
Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L212.
The forward pass uses (a simplified sketch follows the list):
- torch.nn.Linear used for the Query layer:
self.query = nn.Linear(config.hidden_size, self.all_head_size)
- torch.nn.Linear used for the Key layer:
self.key = nn.Linear(config.hidden_size, self.all_head_size)
- torch.nn.Linear used for the Value layer:
self.value = nn.Linear(config.hidden_size, self.all_head_size)
- torch.nn.Dropout layer for dropout:
self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
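A simplified sketch of the scaled dot-product attention computed here, paraphrased from the Transformers implementation and annotated with the tensor shapes printed below (12 heads of size 64, and 12 * 64 = 768):

```python
import math

# Simplified forward pass of BertSelfAttention (sketch).
def self_attention_forward(self, hidden_states, attention_mask=None):
    batch_size, seq_len, _ = hidden_states.size()                   # [2, 9, 768]

    def split_heads(x):
        # [batch, seq, 768] -> [batch, 12 heads, seq, 64]
        return x.view(batch_size, seq_len, self.num_attention_heads,
                      self.attention_head_size).permute(0, 2, 1, 3)

    query = split_heads(self.query(hidden_states))                  # [2, 12, 9, 64]
    key = split_heads(self.key(hidden_states))                      # [2, 12, 9, 64]
    value = split_heads(self.value(hidden_states))                  # [2, 12, 9, 64]

    # Attention scores: Q x K^T, scaled by the square root of the head size.
    attention_scores = torch.matmul(query, key.transpose(-1, -2))   # [2, 12, 9, 9]
    attention_scores = attention_scores / math.sqrt(self.attention_head_size)
    if attention_mask is not None:
        attention_scores = attention_scores + attention_mask        # large negative values on padding

    # Softmax over the last dimension, then dropout on the probabilities.
    attention_probs = nn.Softmax(dim=-1)(attention_scores)          # [2, 12, 9, 9]
    attention_probs = self.dropout(attention_probs)

    # Weighted sum of the values, then merge the heads back together.
    context = torch.matmul(attention_probs, value)                  # [2, 12, 9, 64]
    context = context.permute(0, 2, 1, 3).contiguous()              # [2, 9, 12, 64]
    context = context.view(batch_size, seq_len, self.all_head_size) # [2, 9, 768]
    return (context,)
```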
Attention Head Size:
64
Combined Attentions Head Size:
768
Hidden States:
torch.Size([2, 9, 768])
Query Linear Layer:
torch.Size([2, 9, 768])
Key Linear Layer:
torch.Size([2, 9, 768])
Value Linear Layer:
torch.Size([2, 9, 768])
Query:
torch.Size([2, 12, 9, 64])
Key:
torch.Size([2, 12, 9, 64])
Value:
torch.Size([2, 12, 9, 64])
Key Transposed:
torch.Size([2, 12, 64, 9])
Attention Scores:
torch.Size([2, 12, 9, 9])
Attention Scores Divided by Scalar:
torch.Size([2, 12, 9, 9])
Attention Probabilities Softmax Layer:
torch.Size([2, 12, 9, 9])
Attention Probabilities Dropout Layer:
torch.Size([2, 12, 9, 9])
Context:
torch.Size([2, 12, 9, 64])
Context Permute:
torch.Size([2, 9, 12, 64])
Context Reshaped:
torch.Size([2, 9, 768])
BertSelfOutput
This layer contains the basic torch.nn components of the self-attention implementation.
Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L337.
The forward pass uses (a simplified sketch follows the list):
- torch.nn.Linear layer:
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- torch.nn.LayerNorm layer for normalization:
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- torch.nn.Dropout layer for dropout:
self.dropout = nn.Dropout(config.hidden_dropout_prob)
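A simplified sketch of the BertSelfOutput forward pass; the key detail is the residual connection, where the original input tensor is added back before normalization:

```python
# Simplified forward pass of BertSelfOutput (sketch).
def self_output_forward(self, hidden_states, input_tensor):
    hidden_states = self.dense(hidden_states)                     # [2, 9, 768]
    hidden_states = self.dropout(hidden_states)
    # Residual connection: add the attention input back, then normalize.
    hidden_states = self.LayerNorm(hidden_states + input_tensor)  # [2, 9, 768]
    return hidden_states
```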
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Normalization Layer:
torch.Size([2, 9, 768])
Assemble BertAttention
Put together BertSelfAttention layer and BertSelfOutput layer to create the BertAttention layer.
Now perform a forward pass using the previous layer’s output as input.
Attention Head Size:
64
Combined Attentions Head Size:
768
Hidden States:
torch.Size([2, 9, 768])
Query Linear Layer:
torch.Size([2, 9, 768])
Key Linear Layer:
torch.Size([2, 9, 768])
Value Linear Layer:
torch.Size([2, 9, 768])
Query:
torch.Size([2, 12, 9, 64])
Key:
torch.Size([2, 12, 9, 64])
Value:
torch.Size([2, 12, 9, 64])
Key Transposed:
torch.Size([2, 12, 64, 9])
Attention Scores:
torch.Size([2, 12, 9, 9])
Attention Scores Divided by Scalar:
torch.Size([2, 12, 9, 9])
Attention Probabilities Softmax Layer:
torch.Size([2, 12, 9, 9])
Attention Probabilities Dropout Layer:
torch.Size([2, 12, 9, 9])
Context:
torch.Size([2, 12, 9, 64])
Context Permute:
torch.Size([2, 9, 12, 64])
Context Reshaped:
torch.Size([2, 9, 768])
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Normalization Layer:
torch.Size([2, 9, 768])
BertIntermediate
This layer contains the basic torch.nn components of the Bert model implementation.
Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L400.
The forward pass uses (a simplified sketch follows):
- torch.nn.Linear layer:
self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
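A simplified sketch of the BertIntermediate forward pass, expanding from 768 to 3072 dimensions and applying the gelu activation:

```python
# Simplified forward pass of BertIntermediate (sketch).
def intermediate_forward(self, hidden_states):
    hidden_states = self.dense(hidden_states)                 # [2, 9, 768] -> [2, 9, 3072]
    hidden_states = self.intermediate_act_fn(hidden_states)   # gelu, picked from ACT2FN
    return hidden_states
```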
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 3072])
Hidden States Gelu Activation Function:
torch.Size([2, 9, 3072])
BertOutput
This layer contains the basic torch.nn components of the Bert model implementation.
Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L415.
The forward pass uses (a simplified sketch follows the list):
- torch.nn.Linear layer:
self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
- torch.nn.LayerNorm layer for normalization:
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- torch.nn.Dropout layer for dropout:
self.dropout = nn.Dropout(config.hidden_dropout_prob)
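A simplified sketch of the BertOutput forward pass, projecting back from 3072 to 768 dimensions with the same residual-plus-LayerNorm pattern as BertSelfOutput:

```python
# Simplified forward pass of BertOutput (sketch).
def output_forward(self, hidden_states, input_tensor):
    hidden_states = self.dense(hidden_states)                     # [2, 9, 3072] -> [2, 9, 768]
    hidden_states = self.dropout(hidden_states)
    # Residual connection with the attention output, then normalization.
    hidden_states = self.LayerNorm(hidden_states + input_tensor)  # [2, 9, 768]
    return hidden_states
```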
Hidden States:
torch.Size([2, 9, 3072])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Layer Normalization:
torch.Size([2, 9, 768])
Assemble BertLayer
Put together BertAttention layer, BertIntermediate layer and BertOutput layer to create the BertLayer layer.
Now perform a forward pass using the previous layer’s output as input.
Attention Head Size:
64
Combined Attentions Head Size:
768
Hidden States:
torch.Size([2, 9, 768])
Query Linear Layer:
torch.Size([2, 9, 768])
Key Linear Layer:
torch.Size([2, 9, 768])
Value Linear Layer:
torch.Size([2, 9, 768])
Query:
torch.Size([2, 12, 9, 64])
Key:
torch.Size([2, 12, 9, 64])
Value:
torch.Size([2, 12, 9, 64])
Key Transposed:
torch.Size([2, 12, 64, 9])
Attention Scores:
torch.Size([2, 12, 9, 9])
Attention Scores Divided by Scalar:
torch.Size([2, 12, 9, 9])
Attention Probabilities Softmax Layer:
torch.Size([2, 12, 9, 9])
Attention Probabilities Dropout Layer:
torch.Size([2, 12, 9, 9])
Context:
torch.Size([2, 12, 9, 64])
Context Permute:
torch.Size([2, 9, 12, 64])
Context Reshaped:
torch.Size([2, 9, 768])
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Normalization Layer:
torch.Size([2, 9, 768])
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 3072])
Hidden States Gelu Activation Function:
torch.Size([2, 9, 3072])
Hidden States:
torch.Size([2, 9, 3072])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Layer Normalization:
torch.Size([2, 9, 768])
Assemble BertEncoder
Put together 12 of the BertLayer layers (in this setup config.num_hidden_layers=12) to create the BertEncoder layer.
Now perform a forward pass using the previous layer’s output as input.
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
----------------- BERT LAYER 1 -----------------
Hidden States:
torch.Size([2, 9, 768])
Query Linear Layer:
torch.Size([2, 9, 768])
Key Linear Layer:
torch.Size([2, 9, 768])
Value Linear Layer:
torch.Size([2, 9, 768])
Query:
torch.Size([2, 12, 9, 64])
Key:
torch.Size([2, 12, 9, 64])
Value:
torch.Size([2, 12, 9, 64])
Key Transposed:
torch.Size([2, 12, 64, 9])
Attention Scores:
torch.Size([2, 12, 9, 9])
Attention Scores Divided by Scalar:
torch.Size([2, 12, 9, 9])
Attention Probabilities Softmax Layer:
torch.Size([2, 12, 9, 9])
Attention Probabilities Dropout Layer:
torch.Size([2, 12, 9, 9])
Context:
torch.Size([2, 12, 9, 64])
Context Permute:
torch.Size([2, 9, 12, 64])
Context Reshaped:
torch.Size([2, 9, 768])
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Normalization Layer:
torch.Size([2, 9, 768])
Hidden States:
torch.Size([2, 9, 768])
Hidden States Linear Layer:
torch.Size([2, 9, 3072])
Hidden States Gelu Activation Function:
torch.Size([2, 9, 3072])
Hidden States:
torch.Size([2, 9, 3072])
Hidden States Linear Layer:
torch.Size([2, 9, 768])
Hidden States Dropout Layer:
torch.Size([2, 9, 768])
Hidden States Layer Normalization:
torch.Size([2, 9, 768])
----------------- BERT LAYER 2 -----------------
...
----------------- BERT LAYER 12 -----------------
BertPooler
This layer pools the encoder output: it takes the hidden state of the first token ([CLS]) and passes it through a Linear layer followed by a Tanh activation.
The implementation can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L601.
The forward pass uses (a simplified sketch follows the list):
- torch.nn.Linear layer:
self.dense = torch.nn.Linear(config.hidden_size, config.hidden_size)
- torch.nn.Tanh activation function layer:
self.activation = torch.nn.Tanh()
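A simplified sketch of the BertPooler forward pass: keep only the first token ([CLS]) of every sequence, then apply the Linear layer and Tanh:

```python
# Simplified forward pass of BertPooler (sketch).
def pooler_forward(self, hidden_states):
    first_token_tensor = hidden_states[:, 0]         # [2, 9, 768] -> [2, 768]
    pooled_output = self.dense(first_token_tensor)   # [2, 768]
    pooled_output = self.activation(pooled_output)   # tanh, [2, 768]
    return pooled_output
```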
Hidden States:
torch.Size([2, 9, 768])
First Token [CLS]:
torch.Size([2, 768])
First Token [CLS] Linear Layer:
torch.Size([2, 768])
First Token [CLS] Tanh Activation Function:
torch.Size([2, 768])
Assemble BertModel
Put together BertEmbeddings layer, BertEncoder layer and BertPooler layer to create the BertModel layer.
Now perform a forward pass using the previous layer’s output as input.
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Created Tokens Positions IDs:
tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])
Tokens IDs:
torch.Size([2, 9])
Tokens Type IDs:
torch.Size([2, 9])
Word Embeddings:
torch.Size([2, 9, 768])
Position Embeddings:
torch.Size([1, 9, 768])
Token Types Embeddings:
torch.Size([2, 9, 768])
Sum Up All Embeddings:
torch.Size([2, 9, 768])
Embeddings Layer Normalization:
torch.Size([2, 9, 768])
Embeddings Dropout Layer:
torch.Size([2, 9, 768])
----------------- BERT LAYER 1 -----------------
...
----------------- BERT LAYER 12 -----------------
...
Hidden States:
torch.Size([2, 9, 768])
First Token [CLS]:
torch.Size([2, 768])
First Token [CLS] Linear Layer:
torch.Size([2, 768])
First Token [CLS] Tanh Activation Function:
torch.Size([2, 768])
Assemble Components
Put together BertModel layer, torch.nn.Dropout layer and torch.nn.Linear layer to create the BertForSequenceClassification model.
Now perform a forward pass using the previous layer’s output as input.
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Attention Head Size:
64
Combined Attentions Head Size:
768
Created Tokens Positions IDs:
tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])
Tokens IDs:
torch.Size([2, 9])
Tokens Type IDs:
torch.Size([2, 9])
Word Embeddings:
torch.Size([2, 9, 768])
Position Embeddings:
torch.Size([1, 9, 768])
Token Types Embeddings:
torch.Size([2, 9, 768])
Sum Up All Embeddings:
torch.Size([2, 9, 768])
Embeddings Layer Normalization:
torch.Size([2, 9, 768])
Embeddings Dropout Layer:
torch.Size([2, 9, 768])
----------------- BERT LAYER 1 -----------------
...
----------------- BERT LAYER 12 -----------------
...
Hidden States:
torch.Size([2, 9, 768])
First Token [CLS]:
torch.Size([2, 768])
First Token [CLS] Linear Layer:
torch.Size([2, 768])
First Token [CLS] Tanh Activation Function:
torch.Size([2, 768])
Complete Diagram
- If you want a .pdf version of this diagram: bert_inner_workings.pdf.
- If you want a .png version of this diagram: bert_inner_workings.png.
Final Note
If you made it this far, congrats! 🎊 And thank you 🙏 for your interest in my tutorial!
I’ve been using this code for a while now and I feel it got to a point where it is nicely documented and easy to follow.
Of course it is easy for me to follow because I built it. That is why any feedback is welcome and it helps me improve my future tutorials!
If you see something wrong please let me know by opening an issue on my ml_things GitHub repository!
A lot of tutorials out there are mostly a one-time thing and are not being maintained. I plan on keeping my tutorials up to date as much as I can.
Contact 🎣
🦊 GitHub: gmihaila
🌐 Website: gmihaila.github.io
👔 LinkedIn: mihailageorge
📬 Email: georgemihaila@my.unt.edu.com
Originally published at https://gmihaila.github.io.