Coverstory - (20/04/2025 to 26/04/2025)

Artificial Intelligence - Changing our future

AI, or artificial intelligence, seems to be on the tip of everyone’s tongue these days. While I’ve been aware of this major trend in tech development for a while, I’ve noticed AI appearing more and more as one of the most in-demand areas of expertise for job seekers.

I’m sure that for many of us, the term “AI” conjures up sci-fi fantasies or fear about robots taking over the world. The depictions of AI in the media have run the gamut, and while no one can predict exactly how it will evolve in the future, the current trends and developments paint a much different picture of how AI will become part of our lives.

In reality, AI is already at work all around us, impacting everything from our search results, to our online dating prospects, to the way we shop. Data shows that the use of AI in many sectors of business has grown by 270% over the last four years.

But what will AI mean for the future of work? As computers and technology have evolved, this has been one of the most pressing questions. As with many technological developments throughout history, the advancement of artificial intelligence has created fears that human workers will become obsolete.

The reality is probably a lot less dire, but maybe even more complicated.

What is AI?

Before we do a deep dive on the ways in which AI will impact the future of work, it’s important to start simple: what is AI? A straightforward definition from Britannica states that artificial intelligence is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

“AI” has become a catchall term for advancements in computing, systems and technology in which computer programs can perform tasks or solve problems that require the kind of reasoning we associate with human intelligence, even learning from past processes.

This ability to learn is a key component of AI. Algorithms, like the dreaded Facebook algorithm that replaced all our friends with sponsored content, are often associated with AI. But there is a key distinction.

As software journalist Kaya Ismail writes, an algorithm is simply a “set of instructions,” a formula for processing data. AI takes this to another level, and can be made up of a set of algorithms that have the capacity to change and rewrite themselves in response to the data inputted, hence displaying “intelligence.”
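To make that distinction concrete, here is a small, purely illustrative Python sketch of my own (not drawn from any particular product). The first function is a plain algorithm, a fixed set of instructions that never changes; the second adjusts internal keyword weights in response to the labeled examples it is shown, which is the kind of data-driven self-adjustment people have in mind when they call a system “intelligent.”

# A plain algorithm: a fixed rule, hand-written by a programmer, that never changes.
def is_spam_fixed_rule(message: str) -> bool:
    return "free money" in message.lower()

# A minimal "learning" program: its keyword weights are not hard-coded but are
# adjusted from labeled examples, so its behavior changes with the data it sees.
class KeywordLearner:
    def __init__(self):
        self.weights = {}  # learned word weights, empty until trained

    def train(self, messages, labels):
        for text, is_spam in zip(messages, labels):
            for word in text.lower().split():
                delta = 1 if is_spam else -1
                self.weights[word] = self.weights.get(word, 0) + delta

    def predict(self, message: str) -> bool:
        score = sum(self.weights.get(word, 0) for word in message.lower().split())
        return score > 0

learner = KeywordLearner()
learner.train(["win free money now", "lunch at noon?"], [True, False])
print(is_spam_fixed_rule("claim your free money"))  # True, and the rule can never improve
print(learner.predict("free money offer"))          # True, learned from the examples above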

AI will probably not make human workers obsolete, at least not for a long time. To put some of your fears to bed: the robots are probably not coming for your job, at least not yet. Given how artificial intelligence has been portrayed in the media, particularly in some of our favorite sci-fi movies, it’s clear that the advent of this technology has created a fear that AI will one day make human beings obsolete in the workforce. After all, as technology has advanced, many tasks that were once executed by human hands have become automated. It’s only natural to fear that the leap toward creating intelligent computers could herald the beginning of the end of work as we know it.

But I don’t think there is any reason to be so fatalistic. A recent paper published by the MIT Task Force on the Work of the Future, entitled “Artificial Intelligence and the Future of Work,” looks closely at developments in AI and their relation to the world of work, and it paints a more optimistic picture.

Rather than promoting the obsolescence of human labor, the paper predicts that AI will continue to drive massive innovation that will fuel many existing industries and could have the potential to create many new sectors for growth, ultimately leading to the creation of more jobs.

While AI has made major strides toward replicating the efficacy of human intelligence in executing certain tasks, there are still major limitations. In particular, AI programs are typically capable of only “specialized” intelligence, meaning they can solve one problem and execute one task at a time. They are often rigid, unable to respond to changes in input or to perform any “thinking” outside of their prescribed programming.

Humans, however, possess “generalized intelligence,” with the kind of problem solving, abstract thinking and critical judgement that will continue to be important in business. Human judgement will be relevant, if not in every task, then certainly throughout every level across all sectors.

There are many other factors that could limit runaway advancement in AI. AI often requires “learning” from massive amounts of data, which raises questions about the availability of the right kind of data, the work of categorizing it, and the privacy and security issues that surround it. There is also the limitation of computation and processing power: the cost of electricity alone to power one supercharged language model was estimated at $4.6 million.

Another important limitation of note is that data can itself carry bias, reflecting societal inequities or the implicit biases of the designers who create and input the data. If there is bias in the data fed into an AI system, that bias is likely to carry over into the results the system generates.
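A tiny, hypothetical example makes the point. If a model does nothing more than learn approval rates from past decisions, and those past decisions favored one group, the model will faithfully reproduce that skew. The data below is invented purely for illustration:

# Invented historical decisions: group "A" was approved far more often than group "B".
historical_decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_approval_rates(records):
    # "Training" here is just counting outcomes per group.
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

print(learn_approval_rates(historical_decisions))
# {'A': 0.75, 'B': 0.25} -- the bias in the input data becomes the model's output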

A bill, the Algorithmic Accountability Act, has even been introduced in Congress with the goal of requiring the Federal Trade Commission to investigate the use of new AI technologies for their potential to perpetuate bias.

Based on these factors and many others, the MIT paper argues that we are a long way from the point at which AI is comparable to human intelligence and could theoretically replace human workers entirely.

Provided there is investment at all levels, from education to the private sector and governmental organizations—anywhere that focuses on training and upskilling workers—AI has the potential to ultimately create more jobs, not fewer. The question should then become not “humans or computers” but “humans and computers” involved in complex systems that advance industry and prosperity.

This paper is a fascinating read for anyone hoping to dive deeper into AI and the many potential directions in which it may lead.

AI is becoming standard in all businesses, not just in the world of tech. A couple of times recently, AI has come up in conversation with a client or an associate, and I’ve noticed a fallacy in how people think about it: there seems to be a sense for many that it is a phenomenon likely to have big impacts only in the tech world.

In case you hadn’t noticed, the tech world is the world these days. Never forget that economist Paul Krugman said in 1998 that “By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” You definitely don’t want to be behind the curve when it comes to AI.

In fact, 90% of leading businesses already have ongoing investment in AI technologies. More than half of businesses that have implemented some manner of AI-driven technology report experiencing greater productivity.

AI is likely to have a strong impact on certain sectors in particular:

Medical:

The potential benefits of utilizing AI in the field of medicine are already being explored. The medical industry has a robust amount of data, which can be utilized to create predictive models related to healthcare. Additionally, AI has been shown to be more effective than physicians in certain diagnostic contexts.

Automotive:

We’re already seeing how AI is impacting the world of transportation and automobiles with the advent of autonomous vehicles and autonomous navigation. AI will also have a major impact on manufacturing, including within the automotive sector.

Cybersecurity:

Cybersecurity is front of mind for many business leaders, especially considering the spike in cybersecurity breaches throughout 2020. Attacks rose 600% during the pandemic as hackers capitalized on people working from home on less secure systems and Wi-Fi networks. AI and machine learning will be critical tools in identifying and predicting cybersecurity threats. AI will also be a crucial asset for security in the world of finance, given that it can process large amounts of data to predict and catch instances of fraud.
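As a rough illustration of that fraud-catching idea, the sketch below uses scikit-learn’s IsolationForest, a standard anomaly-detection model, to flag transaction amounts that look out of place. The amounts are made up and a real system would use many more features, but the principle is the same: learn what “normal” looks like and surface the outliers.

import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly ordinary purchase amounts, plus two extreme outliers (all values invented).
amounts = np.array(
    [12.5, 9.9, 15.0, 11.2, 14.8, 10.5, 13.1, 9500.0, 12.0, 8800.0]
).reshape(-1, 1)

# Fit an anomaly detector; contamination is a rough guess at the share of oddities.
model = IsolationForest(contamination=0.2, random_state=0)
model.fit(amounts)

flags = model.predict(amounts)  # -1 means "anomalous", 1 means "looks normal"
for amount, flag in zip(amounts.ravel(), flags):
    if flag == -1:
        print(f"Review transaction of ${amount:,.2f} for possible fraud")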

E-Commerce:

AI will play a pivotal role in e-commerce in the future, in every part of the industry from user experience to marketing to fulfillment and distribution. We can expect that, moving forward, AI will continue to drive e-commerce through chatbots, shopper personalization, image-based targeted advertising, and warehouse and inventory automation.

Language:

Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3’s full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020 and was in beta testing as of July 2020, is part of a trend in natural language processing (NLP) toward systems built on pre-trained language representations. Before the release of GPT-3, the largest language model was Microsoft’s Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters, less than a tenth of GPT-3’s.
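GPT-3 itself is only reachable through OpenAI’s paid API, but its smaller open predecessor, GPT-2, is enough to show what an autoregressive language model does: predict the next token, append it to the prompt, and repeat. A minimal sketch, assuming the Hugging Face transformers library (with a backend such as PyTorch) is installed:

from transformers import pipeline

# Load a small pre-trained autoregressive model (GPT-2) for text generation.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence will change the future of work because",
    max_length=40,            # total length of prompt plus generated tokens
    num_return_sequences=1,
)
print(result[0]["generated_text"])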

AI can have a big impact on the job search

If you are moving forward with the hope that a hiring manager may give you the benefit of the doubt on a small misstep within the application, you might be in for a rude awakening. AI already plays a major role in the hiring process, so much so that up to 75% of resumes are rejected by an automated applicant tracking system, or ATS, before they even reach a human being. 
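Real applicant tracking systems vary and their rules are proprietary, but many boil down to keyword matching against the job description. The deliberately simplified sketch below (hypothetical keywords and threshold, not taken from any real ATS) shows why a resume that never mentions those terms can be filtered out before a person ever sees it.

# Hypothetical screening rule: the resume must mention at least MIN_MATCHES
# of the keywords pulled from the job description.
REQUIRED_KEYWORDS = {"python", "machine learning", "sql", "communication"}
MIN_MATCHES = 3

def passes_screen(resume_text: str) -> bool:
    text = resume_text.lower()
    matches = sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)
    return matches >= MIN_MATCHES

resume = "Data analyst with Python, SQL and strong communication skills."
print(passes_screen(resume))  # True: three of the four keywords were found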

In the past, recruiters have had to devote considerable time to poring over resumes to look for relevant candidates. Data from LinkedIn shows that recruiters spend on average 23 hours looking over resumes for one successful hire.

By 2020, natural language processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without attaining any commonsense understanding of the contents of those benchmarks. DeepMind’s AlphaFold 2 (2020) demonstrated the ability to determine the 3D structure of a protein in hours rather than months. Facial recognition advanced to the point where, under some circumstances, some systems claim a 99% accuracy rate.

If you are tech savvy, it would be wise to dive deep and learn as much as you can about working in the AI space. If your skills lie elsewhere, it is important to recognize that AI will have a big impact, and, to the extent of your abilities, you should try to understand the fundamentals of how it functions in different sectors.

AI is definitely here to stay, whether we like it or not. Personally, I don’t think we have anything to be afraid of. The best way to move forward is to be aware of and adapt to the new technology around us, AI included.

