MACHINE LEARNING: AN OVERVIEW


Department of Computer Engineering, Institute of Engineering and Technology, Devi Ahilya University, Indore, India
Department of Applied Sciences, Institute of Engineering and Technology, Devi Ahilya University, Indore, India

How to cite this article (APA): Trisal, A, & Mandloi, DD (2021). Machine learning: an overview. International Journal of Research - GRANTHAALAYAH, 9(7), 343. doi: 10.29121/granthaalayah.v9.i7.2021.4120

Abstract

Given the tremendous availability of data and computing power, there is a resurgence of interest in applying data-driven machine learning methods to problems where traditional engineering solutions are hampered by modeling or algorithmic shortcomings. The purpose of this article is to provide a comprehensive review of machine learning, including its history, types, applications, limitations, and future prospects. The article also discusses the main points of difference between the fields of artificial intelligence and machine learning.

Keywords

Algorithms, Artificial Intelligence, Machine learning, Reinforcement Learning, Supervised Learning, Unsupervised Learning

INTRODUCTION

Machine learning (often abbreviated as ML) can be defined as the study of computer algorithms that improve automatically through experience and through the use of data (Mitchell, 1997). It is often seen as a subset of artificial intelligence. A model is built with the help of machine learning algorithms from a training data set, also known as the sample data. The end goal of this model is to make predictions without any explicit external programming. ML has applications in almost all sectors, with medicine, email filtering, speech recognition, image recognition, and computer vision being a few. It is chiefly used where it is impractical to employ conventional algorithms (Hu, Niu, Carrasco, Lennox, & Arvin, 2020). Machine learning is the process by which computers figure out how to perform tasks without being specifically programmed to do so; it involves computers analyzing data and learning trends from it. For simple tasks, it is advisable to program algorithms that tell the computer exactly how to perform the task, a process in which the computer itself does no learning. For more complex tasks, it can be strenuous for a human to manually construct the required algorithms. In practice, the approach of aiding the machine in developing its own logic and algorithm is more efficacious than the traditional approach of programming it explicitly at every step (Alpaydin, 2020).

MACHINE LEARNING THROUGH AGES

The expression "machine learning" was coined in 1959 by Arthur Samuel, an American pioneer in the fields of computer gaming and artificial intelligence (Samuel, 1959) (Kohavi & Provost, 1998). In the 1960s, Nilsson took this idea further in his book and explored pattern classification as an application of ML (Nilsson, 1965). The idea of pattern recognition was so significant that it continued to inspire ML pioneers into the 1970s (Duda & Hart, 1973). Adding to these advancements, in 1981 a report was presented on using teaching strategies to enable a neural network to recognize 40 characters, comprising twenty-six letters, ten digits, and four special symbols, from a computer terminal (Bozinovski, 1981). Later, Tom M. Mitchell put forward a widely acknowledged definition of machine learning algorithms: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E" (Mitchell, 1997). The 'tasks' in this definition yield an operational characterization of learning rather than describing the field in cognitive terms, echoing Alan Turing's proposal in his paper "Computing Machinery and Intelligence" (Harnad, 2008). Present-day ML serves two primary purposes: the first is to classify data, and the second is to build models that predict future outcomes.

TYPES OF MACHINE LEARNING

There are various methods to train machine learning algorithms, each with its own characteristic merits and demerits. Machine learning involves two variants of data: labeled and unlabeled. Labeled data contains both the input and output parameters in a machine-understandable form, whereas unlabeled data has only one, or neither, of the parameters in such a form. Although some machine learning algorithms are designed for very specific use-cases, they can be broadly classified into three categories: supervised, unsupervised, and reinforcement learning.

Supervised Learning: Supervised learning algorithms build a model of the given dataset, which contains both the inputs and the corresponding labeled outputs (Russell & Norvig, 2010). The dataset comprises training examples in which each input is mapped to the corresponding output. In the mathematical model, each training example is characterized by an array known as the feature vector, and the training data is represented by a matrix. Through an iterative process, supervised learning algorithms formulate a function that can be used to predict the output for new inputs (Mehryar, Afshin, Ameet, & Talwalkar, 2012). An algorithm that gradually increases the accuracy of its predictions over time has learned to perform that task (Mitchell, 1997). Supervised learning can be further divided into active learning, classification, and regression (Alpaydin, 2010). Classification algorithms are used when the outputs form a discrete set of values, and regression algorithms are used when the outputs conform to a continuous range or function.
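To make this workflow concrete, the following minimal sketch (an illustrative setup using the scikit-learn library and its bundled Iris dataset, not material from this article) fits a classification model on labeled training pairs and then scores its predictions on held-out inputs:

```python
# A minimal, illustrative supervised-learning example (assumed setup):
# each row of X is a feature vector, and y holds the corresponding labels.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Hold out part of the labeled data to check predictions on unseen inputs.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # classification: discrete outputs
model.fit(X_train, y_train)                # iteratively fit the function
print("held-out accuracy:", model.score(X_test, y_test))
```

For a continuous-valued target, one would swap the classifier for a regression model, reflecting the classification/regression distinction drawn above.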

Unsupervised Learning: Unsupervised learning algorithms take a set of data that contains only the inputs and find structure in the data, for instance by grouping or clustering data points. In this type of learning, the algorithms learn from unlabeled test data. An unsupervised learning algorithm does not respond to feedback; rather, it works by identifying commonalities. It is used for anomaly detection, clustering, and estimating the probability density function (Jordan, Bishop, & Christopher, 2004).
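As a concrete illustration, the sketch below (with synthetic data invented purely for the example) lets the k-means algorithm group unlabeled points by the commonalities in their features:

```python
# A minimal, illustrative unsupervised-learning example (assumed setup):
# the inputs carry no output labels; k-means clusters them by similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "blobs" of 2-D points, with no labels attached.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", kmeans.labels_[:10])
print("cluster centers:", kmeans.cluster_centers_)
```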

Reinforcement Learning: Reinforcement learning is a field of machine learning concerned with software agents and how they take actions in an environment so as to maximize a reward. Reinforcement learning draws on many disciplines, such as game theory, simulation-based intelligence, and genetic algorithms.

The environment is generally modeled as a Markov decision process, and dynamic programming techniques are used to solve it (Otterlo & Wiering, 2012). Reinforcement learning algorithms are employed when exact models are infeasible; they are also used in self-driving cars.
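To illustrate the connection with dynamic programming, the sketch below runs value iteration on a toy Markov decision process (the two states, two actions, and rewards are invented purely for illustration):

```python
# Value iteration on a tiny, made-up Markov decision process.
# P[s][a] is a list of (probability, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {0: [(1.0, 1, 2.0)], 1: [(1.0, 0, 0.0)]},
}
gamma = 0.9                      # discount factor on future rewards
V = {s: 0.0 for s in P}          # initial value estimate for each state

for _ in range(100):             # repeat the Bellman optimality update
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

print(V)                         # approximate optimal state values
```

Model-free methods such as Q-learning apply the same idea when the transition probabilities are unknown, which is the usual situation when exact models are infeasible.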

APPLICATIONS

Machine learning algorithms play a vital role in settings where a solution must continue to develop after deployment. This adaptive nature of machine learning solutions is one of the core reasons for their global popularity. These algorithms are versatile and can substitute for some human activities. A prominent example is customer-service executives being replaced by chatbots powered by natural language processing: these chatbots analyze customer queries and provide support without human intervention. Large tech giants such as Facebook, Netflix, Google, and Amazon deploy recommendation systems that provide exclusive, personalized content to individual users based on their preferences. Facebook uses recommendation engines for advertisements and news feeds. Netflix accumulates data on its account holders and suggests a variety of movies and shows based on each user's predilections. Google, in turn, uses machine learning to structure its search results and to power YouTube's recommendation system.
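The core idea behind such recommendation engines can be sketched in a few lines of user-based collaborative filtering (the ratings matrix below is fabricated for illustration; the production systems at these companies are, of course, far more elaborate):

```python
# A toy user-based collaborative-filtering recommender (illustrative only).
import numpy as np

# Rows are users, columns are items; 0 means "not yet rated".
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4]], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0  # recommend items for the first user
sims = np.array([cosine(ratings[target], row) for row in ratings])
sims[target] = 0.0  # exclude the user's own row from the average

# Score items as a similarity-weighted average of the other users' ratings.
scores = sims @ ratings / sims.sum()
print("predicted scores for unrated items:", scores[ratings[target] == 0])
```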

LIMITATIONS

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver the expected results (2016) (2018). There are various reasons for this: a lack of (appropriate) data, unavailability of data, data skew, confidentiality issues, poorly designed goals and algorithms, the wrong tools and personnel, insufficient resources, and evaluation problems (2018). In 2018, an Uber self-driving car failed to identify a pedestrian, who was killed in the resulting collision (2018). Even after years of effort and investments of billions of dollars, IBM Watson's endeavors to apply machine learning in medicine were thwarted (2018). Machine learning has also been applied to updating the evidence in systematic reviews, where the growth of the biomedical literature has sharply increased reviewer workload. While the technique has advanced with the aid of training sets, it has not yet progressed to the point where it can reduce that workload without compromising the precision required of the findings.

AI vs ML

As ML finds applications in ever more fields and use-cases, it has become increasingly important to distinguish between artificial intelligence and machine learning. The term 'artificial intelligence' is used generically to denote technology capable of human-like cognitive skills. ML, on the other hand, is a subset of artificial intelligence concerned with algorithms that can improve themselves without being explicitly programmed to do so. Machine learning in turn includes deep learning, whose processing units are neural networks: algorithms that mimic the neurons of the human brain.

FUTURE PROSPECTS

Machine learning is based on the idea of computer algorithms that learn proactively through trial and discovery. It is a form of artificial intelligence that allows software applications to predict outcomes with high precision, and it shifts the emphasis from explicitly building computer programs to helping machines learn without human involvement. Machine learning's future looks extremely promising. Its applications are being used in practically every mainstream domain: medicine, search engines, social media marketing, and academia, to name a few, are all significant consumers of this technology. It appears effectively impossible to find a domain devoid of it. From small enterprises to multinational corporations, jobs that are presently done manually will be automated in the future. According to Gartner, the world's largest research and advisory institution, machine learning lies behind almost every recent trend and pattern observed in the literature, and rightfully so. ML has the potential to revolutionize our lives in ways that were previously unthinkable. Computational rationalization and emerging ML algorithms have reached a fundamental turning point and will steadily develop and improve for all practical purposes. Developing advanced intelligent systems that learn, adapt, and perhaps even act on their own, rather than simply following predetermined instructions, is a watershed moment for innovators and technology providers. People have long sought to create machines that behave and perform all activities the way humans do; as a consequence, machine learning has become AI's ultimate gift to mankind in pursuit of that goal. Machine learning has continued to enter new territories with unprecedented fervor: self-driving cars, computerized assistants, robots, and green infrastructure have recently proved that smart machines are achievable and can deliver appealing benefits. Numerous industrial sectors, including commerce, manufacturing, construction, finance, health care, media, and engineering, have been revolutionized by simulated intelligence modeled after the human brain.

Creative Commons Licence: This work is licensed under a Creative Commons Attribution 4.0 International License.

© Granthaalayah 2014-2021. All Rights Reserved.