AI has seen a huge surge in demand and use over the past two years. With programs such as ChatGPT and DALL-E from OpenAI available for free in your browser, it has become increasingly easy to play around with these tools, which can lead to some pretty impressive results, from intelligent-sounding essays to award-winning pieces of art.
How do developers train the AI to perform such complex tasks? It turns out, training an AI is similar to training a toddler.
The spike in AI research has been driven by the rapid increase in the power of graphics processing units (GPUs) and by heavy investment. Nvidia, a leader in GPU manufacturing and AI research, and OpenAI, an AI research company, have released a number of AI technologies and software, from self-driving cars that use computer vision to differentiate between the road and obstacles to AI that generates 3D models from images.
One important type of AI takes inspiration from the structure of the human brain: interconnected layers of nodes called neurons process information using a set of weights that are adjusted during training. These are known as neural networks, and they are particularly good at recognizing patterns, which is why they are used in programs like image generators, where large quantities of images have to be processed to create a realistic output.
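To make that concrete, here is a minimal sketch of a single artificial neuron in Python. The input values, weights, and bias are made-up numbers; in a real network the weights would be learned during training rather than hand-picked.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed into (0, 1) by a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A neuron with two inputs; the weights here are arbitrary illustrative values
output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
```

A full network simply stacks layers of these neurons, feeding the outputs of one layer in as the inputs of the next.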
There are many methods for training a neural network to recognize those patterns, each with its own pros and cons depending on the use case.
One training method is known as supervised learning, which uses pre-labelled data to train the neural network. For example, to create an AI that can transform text into speech, you would first write a transcript of speech and then read it out loud; the transcript is the pre-labelled data. The neural network compares the transcript with the audio recording you provide so that it can map the sounds to the alphabetical characters. As with any neural network, the more data you give it, the better the result will be. An image classifier works the same way, being trained on a set of pre-labelled images, each with a description of what it shows.
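As a toy illustration of supervised learning, here is a tiny perceptron, one of the simplest possible "neural networks", written in Python. The data points, labels, and learning rate are all invented for the example; the key idea is that the model compares its guess with the true label and nudges its weights whenever it is wrong.

```python
# Four pre-labelled examples: (features, label), with invented values
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1), ((-1.0, -1.5), 0), ((-2.0, -0.5), 0)]

w = [0.0, 0.0]                            # weights, adjusted during training
b = 0.0

for _ in range(20):                       # several passes over the data
    for (x1, x2), label in data:
        guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = label - guess             # compare the guess with the true label
        w[0] += 0.1 * error * x1          # nudge the weights toward the answer
        w[1] += 0.1 * error * x2
        b += 0.1 * error
```

After a few passes, the model classifies every training example correctly, having learned the pattern purely from the labels it was given.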
But labels aren’t always needed, or even possible. In unsupervised learning, you might feed a neural network a dataset of images without any labels and ask it to identify patterns and features within the images without any external context. This type of learning is useful for tasks such as clustering, where the goal is to group similar images together, or for identifying the most important features in an image.
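A classic example of unsupervised learning is the k-means clustering algorithm. The sketch below runs it on a handful of made-up one-dimensional points; notice that no labels appear anywhere, yet two groups emerge on their own.

```python
# Six made-up one-dimensional data points with no labels attached
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [0.0, 10.0]                   # two initial guesses for group centres

for _ in range(10):
    clusters = [[], []]
    for p in points:                      # put each point with its nearest centre
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Move each centre to the average of its group
    centroids = [sum(c) / len(c) for c in clusters]
```

The two centres settle onto the two natural clumps in the data, around 1 and around 8, without ever being told they exist.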
Semi-supervised learning uses a combination of labelled and unlabelled data to train the AI. An example of this would be an image AI that is trained on a small set of labelled images and then used to label a much larger unlabelled set, with its most confident guesses fed back into the training data.
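One simple semi-supervised recipe, often called self-training, can be sketched like this. The points and labels are invented, and a nearest-neighbour rule stands in for a real model.

```python
# A small labelled set plus a larger unlabelled pool (all values invented)
labelled = [(1.0, "cat"), (9.0, "dog")]
unlabelled = [1.5, 2.0, 8.5, 8.0]

for x in unlabelled:
    # Label each new point with its nearest labelled neighbour's label,
    # then add it to the labelled pool so it can help label later points
    _, label = min(labelled, key=lambda pair: abs(pair[0] - x))
    labelled.append((x, label))
```

Two hand-labelled examples end up labelling four more, which is the appeal of the approach: a little labelling effort goes a long way.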
Once an AI has been trained, it can be further fine-tuned to improve performance. This is known as transfer learning. ChatGPT, the question-answering wizard currently making English teachers worry about marking robots' papers, was first based on a pre-trained language model called GPT-3, which was then fine-tuned to improve its ability on tasks such as text generation, language translation, and text summarization.
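In miniature, transfer learning looks something like the sketch below: weights "pre-trained" on one task (the numbers here are invented stand-ins) are copied over and gently adjusted on a tiny dataset for a new task, instead of starting from zero.

```python
pretrained_w = [0.8, -0.3]                # imagine these came from a big dataset
pretrained_b = 0.12

w = list(pretrained_w)                    # start from the pre-trained weights...
b = pretrained_b
task_b = [((0.0, 1.0), 1), ((0.0, -1.0), 0)]   # ...and fine-tune on a new task

for _ in range(10):
    for (x1, x2), label in task_b:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = label - pred
        w[0] += 0.05 * error * x1         # small learning rate: a gentle nudge
        w[1] += 0.05 * error * x2
        b += 0.05 * error
```

Only the second weight changes here (the first never sees a useful signal on this data), which is the whole point: most of the pre-trained knowledge is kept, and only what the new task needs gets adjusted.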
AI can also learn a lot like students do in real life. In active learning, the AI is given unlabelled data and tries to guess the labels, asking for the correct answer only on the examples it is least sure about. This type of learning is useful when labelled data is scarce or expensive to obtain. An example would be a medical diagnostic AI trained on a small dataset of labelled medical images and then given new, unlabelled images to diagnose. The AI makes a diagnosis, and its result is compared to the actual diagnosis made by a medical expert. Over time, the AI's diagnoses become more accurate as it learns from the expert's feedback.
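The loop below is a minimal sketch of that idea; the numbers, the uncertainty band, and the expert_label function (standing in for the human expert) are all invented for illustration. The model only asks the "expert" about the cases it is least sure of, then refits its decision boundary from those answers.

```python
def expert_label(x):
    # Stand-in for the human expert; the true boundary (5.0) is unknown to the model
    return 1 if x >= 5.0 else 0

pool = [0.5, 2.0, 4.8, 5.2, 7.0, 9.5]     # unlabelled cases to diagnose
threshold = 3.0                            # the model's current decision boundary
queried = []                               # answers the expert has given us

for x in pool:
    if abs(x - threshold) < 2.5:           # too close to call: ask the expert
        queried.append((x, expert_label(x)))
        zeros = [p for p, label in queried if label == 0]
        ones = [p for p, label in queried if label == 1]
        if zeros and ones:                 # refit the boundary between the classes
            threshold = (max(zeros) + min(ones)) / 2
```

The model ends up asking about only four of the six cases, yet its boundary lands right between the hardest examples on either side.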
Another training approach that resembles human learning is reinforcement learning, in which the AI is given a reward for good results. An example of this would be a chess bot trained by playing against other chess bots and receiving a reward signal based on the outcome of each game. The AI learns to make decisions that lead to a higher reward, and over time it becomes better at playing chess.
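Chess is far too big to sketch here, so the example below applies the same idea, a textbook algorithm called Q-learning, to a made-up five-cell corridor: the agent earns a reward of 1 only for reaching the rightmost cell, and gradually learns that moving right is worth more than moving left.

```python
import random
random.seed(0)                             # fixed seed so the run is repeatable

# q[(state, move)] estimates how much future reward each move is worth
q = {(s, m): 0.0 for s in range(5) for m in (-1, 1)}

for _ in range(200):                       # play many episodes
    s = 0                                  # start at the left end of the corridor
    while s != 4:                          # cell 4 is the goal
        if random.random() < 0.2:
            m = random.choice((-1, 1))     # explore: try a random move
        else:
            m = max((-1, 1), key=lambda a: q[(s, a)])  # exploit: best-valued move
        nxt = min(max(s + m, 0), 4)        # take the move, staying in the corridor
        reward = 1.0 if nxt == 4 else 0.0
        # Nudge the estimate toward the reward plus the value of the next state
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(s, m)] += 0.5 * (reward + 0.9 * best_next - q[(s, m)])
        s = nxt
```

After training, moving right is valued more highly than moving left in every cell, which is exactly the policy that wins this little game.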
As fast as these bots are learning now, the pace is expected to pick up even more, as major tech companies bet big on the groundbreaking tech. Google, Meta and Microsoft have all announced that they are jumping on board, investing billions, hoping not to be left behind.
Headline image is an upscaled version from futurity.org