Distributed learning in an on-premise cluster - A Kaggle Reinforcement Learning case
Have you tried any distributed learning algorithms? If you are just starting out in this area, probably not; but if you have been on this path for a few years, you might have worked with one of these models. The incredible development of the machine learning field in the last decade has not only brought a new state of the art to several problems but has also taken processing optimization and parallelization to another level. With increasingly larger models, any common machine or even a single sup
January 08, 2021
Machine Learning Reproducibility: A Kaggle Competition Use-Case
Even though Reproducibility in Machine Learning is a theme that people hear about now and then, we still see that it is practiced only to a certain degree. Even among Kaggle [https://www.kaggle.com/] competition winners, we still see a lot of hard-to-reproduce code in Notebooks. Our goal here is to outline some reproducibility elements and how we tackled them in a recent competition. First, what does reproducibility stand for in Machine Learning? During a Machine Learning project, we hav
December 16, 2020
The path to putting your ML model in production
Suppose you are a Data Scientist or Machine Learning Engineer (or another role name of this kind). You took your time to analyze your dataset, clean it, and prepare it to train your model. You then prepared many model candidates using the most recent techniques and took your time to fine-tune them. After all this extensive work, you finally created a model to be proud of. You finally finished your job. Well, unfortunately, no. If your model never goes live and is actively used, delivering value
November 24, 2020
Is it a good idea to run data centers underwater?
There are major infrastructural challenges in running large-scale data centers. One is providing sufficient electric power to keep the facility running. A data center running tens of thousands of servers consumes roughly 10 megawatts of power. Servers not only consume vast amounts of energy, they also generate a lot of heat. The air inside a data center will become sweltering unless you cool it down. Servers cannot function reliably in high temperatures. The cooling solution needs to be both hi
November 24, 2020
Deep Learning and the fear of frauds
Soon we might live in a world where one can never be sure that a video or voice recording is real, no matter how realistic it looks and sounds. Deep learning methods are used with artificial neural networks to create what is known as deepfakes – visual and audio content that, to the naked eye, looks absolutely real. The potential uses of deepfakes are limited only by the imagination of people who have access to the technology required to manufacture them. As technology advances, the tools for cr
November 09, 2020
Are you ready for your Neuralink Brain Implant?
The human brain consumes incredibly low amounts of energy, has great longevity, and requires little maintenance. No man-made computing hardware comes even close to the human brain in these qualities. Right now, the brain is mostly a black box to us. We know very little about how it works. As of yet, scientists have had a limited toolset to study the inner workings of a human brain. Devices that interface with the brain could help us better understand it, repair it, and possibly improve it. The most pr
October 31, 2020
The Current State of Self-Driving Technology
Technology continues to advance, and self-driving cars are the next logical step for our society. This poses a fundamental question: do we know how safe autonomous vehicles can be? SAE International is an organization that categorizes "levels of driving automation". It defines six levels of automation for cars, ranging from Level 0 (no driving automation) to Level 5 (full automation), transitioning gradually from "driver support features" to "
October 30, 2020
What Kind of AI Infrastructure is Best for my Business?
In this week's Exponential Chats, some of the team members responsible for Amalgam's development will have a chat about the various infrastructure alternatives available when it comes to training and deployment of Artificial Intelligence models. Between Cloud, Colocation, and On-Premise, which one would you say is the best infrastructure for your AI needs? Come join us and participate by asking questions or giving your opinion in the live chat. - Adriano Marques is the founder and CEO of Exponen
September 30, 2020
The Eight Challenges You'll Face With On-Premise Artificial Intelligence
As glamorous as it is to have your own Artificial Intelligence Optimized On-Premise Data Center, it doesn't come easy. It is absolutely true that, if done right, it boasts much better performance and much lower costs than resorting to the cloud or even using co-location to perform your processing workload when creating AI-driven solutions. However, most people are not aware of what really makes an AI Optimized Data Center and end up building an expensive half-baked solution that can't perform o
September 18, 2020
Is the cloud moving to the sea?
In this week's Exponential Chats, some of the team members responsible for Amalgam's development will have a chat about Microsoft's project codenamed Natick, an attempt at sustainably running a Data Center underwater. After 2 years of deployment, Microsoft claims that its servers were up to 8 times more reliable than their counterparts on land. Is that an indication that the cloud is actually moving to the sea? Come join us and participate by asking questions or giving your opini
September 16, 2020
How safe are self-driving cars?
In this week's Exponential Chats, some of the team members responsible for Amalgam's development will have a chat about the safety of current self-driving technology. Despite the fact that there are very few cars on the road today capable of self-driving in all conditions, most new cars are already incorporating some of that technology, and soon we'll be trusting our lives to it. Whether or not you're ready to give up your steering wheel, we think you'll enjoy this discussion. Come join us an
September 09, 2020
A few potential uses (and misuses) for GPT-3
GPT-3 is a 175-billion-parameter autoregressive language model by OpenAI, released in May 2020. It is a deep learning system that takes input in the form of human-readable language and produces human-readable output. The OpenAI team tested GPT-3 in few-shot learning mode: the model is given a verbal description of the task and a few examples of context and completion at inference time. Then the machine is given another instance of context and is expected to provide the completion. Two variations of
August 11, 2020
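The few-shot setup described in the GPT-3 teaser above can be sketched as a plain prompt string: a task description, a few context => completion examples, and a final context left for the model to complete. This is a minimal illustration only (the translation task and word pairs are a commonly cited example, not output from these posts); in practice the assembled string would be sent to the model's text-completion endpoint.

```python
# Minimal sketch of a few-shot prompt: task description, example
# context/completion pairs, then one final context with no completion.
task_description = "Translate English to French."

# Illustrative example pairs (assumed for this sketch).
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

query = "peppermint"  # the context the model should complete

prompt = task_description + "\n\n"
for english, french in examples:
    prompt += f"{english} => {french}\n"
prompt += f"{query} =>"  # model is expected to append the translation

print(prompt)
```

In zero-shot mode the `examples` list would simply be empty, leaving only the task description and the final context.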