This wiki dives deep into a wide variety of Computer Vision, Machine Learning, and Computer Science concepts, heuristics, and building blocks. The list of the wiki's pages is ever-growing, but we keep it structured so navigation stays painless.

Below, we have prepared a summary of every section so you can get a sneak peek of what to expect.

Please use the menu on the left-hand side to navigate through the pages, sub-pages, and sections. The menu on the right-hand side can be used to jump across the sections of the current page.

Computer Vision is a complex field with many layers one needs to understand before working in it. This section covers critical aspects of Machine Learning and Computer Vision that are vital for grasping the basics of vision AI. If you want to learn about the basic concepts widely used in CV, please check this section out.

When building an ML solution, one must have a clear vision of what the result should be. Computer Vision tasks are the core pillars that answer this question: you can classify images, detect objects, build segmentation maps, segment individual objects, and much more. If you need help choosing the direction and approach for your case, please start by exploring the potential CV tasks.

Most of the fancy Machine Learning papers published are about model architectures, and the range of architectures out there is endless. Still, it is essential to grasp this field and know the most commonly used solutions and architecture families. To better understand a specific architecture, please check the corresponding page.

Is it SOTA? Machine Learning metrics are used to measure the performance of a model, and each Computer Vision task and use case requires different evaluation metrics. This section discusses which metric you should use for specific use cases and provides benchmark values, the intuition behind each metric, its formula and variations, simple examples, and small code snippets to calculate it. Please take a look at the corresponding page to get a comprehensive overview of a specific metric.
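
As a quick taste, here is a minimal, purely illustrative sketch (not an excerpt from the metrics pages) of Intersection over Union (IoU), a popular detection and segmentation metric, computed for two axis-aligned boxes:

```python
# Illustrative sketch: IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    """Return the Intersection over Union of two boxes, in [0, 1]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)       # overlapping area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```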

Loss is the number Data Scientists always want to see going down and, ideally, converging to zero. The loss function is the target function a neural network minimizes. Unfortunately, there is no single universal way to calculate loss, so different loss functions suit different use cases. The wiki explores loss functions, their applications, and how they are computed. Feel free to explore the potential options or return to this section when you start building your own model.
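
For a flavor of what such a calculation looks like, here is a small, illustrative sketch (not taken from the loss pages themselves) of the cross-entropy loss for a single multi-class prediction:

```python
import numpy as np

# Illustrative sketch: cross-entropy loss for one prediction over three classes.
def cross_entropy(probs, true_class, eps=1e-12):
    """probs: predicted class probabilities (sum to 1); true_class: index of the label."""
    return -np.log(probs[true_class] + eps)

probs = np.array([0.7, 0.2, 0.1])
print(cross_entropy(probs, true_class=0))  # ≈ 0.357; a perfect prediction would give 0
```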

Solvers, also called optimizers, are the algorithms that navigate the loss landscape and drive your model toward the minimal loss. If you want to understand the most popular options, dive deeper into their hyperparameters, and see how everything works, please explore this section from top to bottom.
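
As a tiny sketch (assuming PyTorch; the toy loss and values are illustrative, not a quote from the section), here is an optimizer stepping a single parameter down a quadratic loss:

```python
import torch

# Illustrative sketch: plain SGD nudging one parameter toward the minimum of a toy loss.
w = torch.tensor(5.0, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

for _ in range(100):
    optimizer.zero_grad()
    loss = (w - 2.0) ** 2   # toy loss with its minimum at w = 2
    loss.backward()
    optimizer.step()

print(round(w.item(), 4))  # approximately 2.0 after the updates
```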

Neural networks are flexible and come with a list of adjustable hyperparameters you should consider before jumping into training. However, the training process has hyperparameters of its own. Please explore the section's pages to understand which hyperparameters can significantly influence the training process.
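
Purely for illustration, and with names and values that are common defaults rather than anything prescribed by the wiki, training-level hyperparameters are often gathered in a single configuration like this:

```python
# Hypothetical example of training-level hyperparameters collected in one place.
training_config = {
    "batch_size": 32,               # samples processed per optimization step
    "epochs": 50,                   # full passes over the training set
    "learning_rate": 1e-3,          # initial step size used by the optimizer
    "weight_decay": 1e-4,           # L2-style regularization strength
    "early_stopping_patience": 5,   # stop if validation loss stalls for N epochs
}
```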

A scheduler is an algorithm that adjusts the optimizer's behavior over the course of training. Concretely, it modulates the optimizer's learning rate, for example ramping it up during warm-up and decaying it later. Check it out to ensure your optimizer works as intended and is set up for success.
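
As a brief sketch (assuming PyTorch; the model, scheduler choice, and values here are illustrative), a step scheduler can decay the learning rate every few epochs:

```python
import torch

# Illustrative sketch: StepLR lowers the optimizer's learning rate by 10x every 10 epochs.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... one training epoch would run here ...
    optimizer.step()      # placeholder optimizer step for the sketch
    scheduler.step()      # update the learning rate according to the schedule
    if (epoch + 1) % 10 == 0:
        print(epoch + 1, scheduler.get_last_lr())  # 0.01, then 0.001, then 0.0001
```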

Data diversity is one of the foundations of a model's success. However, it is sometimes nearly impossible to collect many distinct data samples. Therefore, Data Scientists use augmentation techniques to bring the much-needed diversity to the data. Plenty of augmentation techniques exist, so this section explores them in detail, with attention to their application and parameters. Please refer to the pages themselves to learn more.
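
To give a rough idea (a sketch assuming torchvision; the specific transforms and parameters are illustrative, not prescribed by the wiki), an augmentation pipeline might look like this:

```python
from torchvision import transforms

# Illustrative augmentation pipeline applied on the fly during training.
train_augmentations = transforms.Compose([
    transforms.RandomResizedCrop(224),                       # random crop rescaled to 224x224
    transforms.RandomHorizontalFlip(p=0.5),                  # mirror the image half the time
    transforms.ColorJitter(brightness=0.2, contrast=0.2),    # mild photometric changes
    transforms.ToTensor(),
])
# Usage: augmented = train_augmentations(pil_image)
```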

A model's lifecycle does not end when the model is trained. Once developers are done building it, they release it to a production environment for further testing and tuning. Strictly speaking, model deployment is not necessarily a Data Scientist's task, but developers still need a basic grasp of the deployment process, its techniques, and its challenges. Therefore, please check the Deployment section to learn about these yourself.

When preparing for model training, you must split your data into training, validation, and test sets to effectively train your model and accurately evaluate its performance on unseen data. There are various ways to do this, and this section covers the potential approaches along with their benefits and drawbacks. Please check it out to make sure your data is prepared correctly.
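
As one common approach (a sketch assuming scikit-learn; the 70/15/15 ratios and placeholder data are illustrative), you can carve out stratified train, validation, and test sets in two steps:

```python
from sklearn.model_selection import train_test_split

# Illustrative split: 70% train, 15% validation, 15% test, preserving class proportions.
X = list(range(1000))                 # placeholder samples
y = [i % 2 for i in range(1000)]      # placeholder binary labels

X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```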
