by xoxo Holla
Machine Learning (ML), as an area of focus in computer and data science, has a lot of jargon that can send you down rabbit holes. It’s definitely happened to me! So, with that in mind, I’m creating a directory of common terms, resources for extra learning, open-source libraries I’ve used (and for what purpose), and my two cents on whether it’s worth diving down each rabbit hole to know more. This post will most definitely be updated as I learn throughout my mentorship and beyond.
I don’t have a lot of time to read: check out David Fumo’s intro to the different ML algorithms here.
I have _some_ time to read:
I’ve got all the time to learn at my own pace:
Supervised Learning- Fancy definition:
Supervised learning maps a set of inputs to an expected output. How the inputs influence the output is determined by different methods (algorithms), depending on your data or your strategy.
My hot take:
I’m providing source data for my algorithm to train on, and I have a fair idea of how different parts of my data are related or interact, or a specific idea of what I want my result to be.
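Here’s a tiny sketch of that idea in plain Python: we hand the algorithm labelled examples (inputs paired with known outputs) and it learns the relationship. The data and the line `y = 2x + 1` are made up for illustration; the fitting method here is ordinary least squares.

```python
# A minimal sketch of supervised learning: fitting a straight line
# (y = w*x + b) to labelled examples with ordinary least squares.
# The data below is invented purely for illustration.

xs = [1, 2, 3, 4, 5]    # inputs
ys = [3, 5, 7, 9, 11]   # known outputs (labels): here y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates for slope and intercept
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(w, b)  # learned parameters: 2.0 and 1.0
```

Because we supplied the “right answers” (the labels), the algorithm can check itself against them while learning — that supervision is what gives the approach its name.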
Unsupervised Learning- Fancy definition:
Unsupervised learning is used to ascertain relationships between data points without the help of extensive labelling or relationship mapping.
Training set- a small set of data used by a supervised algorithm to determine how inputs impact the output.
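A quick sketch of how a training set typically gets carved out of a dataset: shuffle, then hold some data back for evaluation. The 80/20 ratio below is just a common convention, not a rule, and the toy data is invented.

```python
# Splitting a dataset into a training set and a held-out test set.
import random

data = [(x, 2 * x + 1) for x in range(50)]  # toy (input, label) pairs
random.seed(42)        # fixed seed so the split is reproducible
random.shuffle(data)   # shuffle so the split isn't ordered

split = int(0.8 * len(data))                # 80% for training
train_set, test_set = data[:split], data[split:]
print(len(train_set), len(test_set))        # 40 10
```

The model only ever sees `train_set`; `test_set` stays untouched so you can later measure how well the model generalises.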
Model- the set of weighted inputs created by an ML algorithm that determines the output.
Corpus- in natural language processing, a collection of texts used to train an ML algorithm.
One Hot Encoding- a method used to denote categorical data in a binary way. More information here.
For the command line
Welcome to ze place for inspiration!