The human brain consumes incredibly little energy, has great longevity, and requires little maintenance. No man-made computing hardware comes even close to the human brain in these qualities. Right now, the brain is mostly a black box to us. We know very little about how it works. As of yet, scientists have had a limited toolset to study the inner workings of the human brain. Devices that interface with the brain could help us better understand it, repair it, and possibly improve it. The most pr
Technology in our world has continued to advance, and self-driving cars are the next logical step for our society. This poses a fundamental question: do we know how safe autonomous vehicles can be? SAE International is an organization that defines a categorization for "levels of driving automation". It defines six levels of automation for cars, ranging from Level 0 (no driving automation) to Level 5 (full automation), transitioning gradually from "driver support features" to "
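The six-level SAE taxonomy mentioned above can be written down as a simple lookup, a minimal sketch using the level names from SAE J3016 (the helper function and its threshold are illustrative assumptions, not part of the standard):

```python
# SAE J3016 levels of driving automation, Level 0 through Level 5.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def is_driver_support(level: int) -> bool:
    """Hypothetical helper: Levels 0-2 are 'driver support features';
    Levels 3-5 are 'automated driving features' per SAE J3016."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 2

print(SAE_LEVELS[5])         # Full Driving Automation
print(is_driver_support(2))  # True
```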
With the constant advancement of genetic engineering, a common concern is being able to identify the lab-of-origin of genetically engineered DNA sequences. For that reason, AltLabs hosted the Genetic Engineering Attribution Challenge to gather many teams to propose new tools to solve this problem. Here we show our proposed method, which aims to rank the most likely labs-of-origin and generate embeddings for DNA sequences and labs. These embeddings can also be used to perform various other tas
In recent years, the term Artificial Intelligence has gained strength, and with it have emerged professions such as Data Scientist and Machine Learning Engineer. Knowing and applying machine learning is attractive and appears to be the path to success. However, this path can be troubled, and especially discouraging for those who are just starting out. Over the years working as a Data Scientist and Machine Learning Researcher, I have witnessed several common mistakes that made life di
With cloud offerings becoming more abundant and diverse, cloud infrastructure seems to offer a much cheaper and simpler alternative to an on-premises data center. Many organizations that need Artificial Intelligence to help with decision-making, problem-solving, and similar tasks face a complicated decision: what is the best infrastructure deployment for AI workloads? Generally speaking, there are three possible deployment options. You can run your AI on-premises in your own datacenter, rent some space at
In this week's Exponential Chats, some of the team members responsible for Amalgam's development will discuss the various infrastructure alternatives available for training and deploying Artificial Intelligence models. Between Cloud, Colocation, and On-Premise, which one would you say is the best infrastructure for your AI needs? Come join us and participate by asking questions or giving your opinion in the live chat. - Adriano Marques is the founder and CEO of Exponen
As glamorous as it is to have your own Artificial Intelligence Optimized On-Premise Data Center, it doesn't come easy. It is absolutely true that, if done right, it boasts much better performance and much lower costs than resorting to the cloud or even using co-location to perform your processing workload when creating AI-driven solutions. However, most people are not aware of what really makes an AI Optimized Data Center and end up building an expensive half-baked solution that can't perform o
In this week's Exponential Chats, some of the team members responsible for Amalgam's development will discuss Microsoft's project codenamed Natick, an attempt to run a data center sustainably underwater. After two years of deployment, Microsoft is claiming that their servers were up to eight times more reliable than their counterparts on land. Is that an indication that the cloud is actually moving to the sea? Come join us and participate by asking questions or giving your opini
In this week's Exponential Chats, some of the team members responsible for Amalgam's development will discuss the safety of current self-driving technology. Despite the fact that there are very few cars on the road today capable of self-driving in all conditions, most new cars already incorporate some of that technology, and soon we'll be trusting our lives to it. Whether or not you're ready to give up your steering wheel, we think you'll enjoy this discussion. Come join us an
GPT-3 is a 175-billion-parameter autoregressive language model released by OpenAI in May 2020. It is a deep learning system that takes input in the form of human-readable language and produces human-readable output. The OpenAI team tested GPT-3 in few-shot learning mode: the model is given a verbal description of the task and a few examples of context and completion at inference time. The model is then given another instance of context and expected to provide the completion. Two variations of
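The few-shot setup described above amounts to assembling a single prompt: a task description, a few context/completion pairs, and a final context left open for the model to complete. A minimal sketch (the task, separator format, and example pairs here are illustrative assumptions, not OpenAI's exact prompt templates):

```python
# Build a few-shot prompt for an autoregressive language model:
# task description, then example context => completion pairs,
# then a new context the model is expected to complete.
def build_few_shot_prompt(task, examples, query):
    lines = [task, ""]
    for context, completion in examples:
        lines.append(f"{context} => {completion}")
    lines.append(f"{query} =>")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Translate English to French.",
    examples=[("sea otter", "loutre de mer"), ("cheese", "fromage")],
    query="peppermint",
)
print(prompt)
```

The resulting string would be sent as-is to the model; the "completion" is whatever text the model generates after the final `=>`.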
In next week's Exponential Chats, some of the team members responsible for Amalgam's development will discuss how Artificial Intelligence is defined, with practical examples and easy tips on how to distinguish AI from everything else.