In the process of applying for a job, the interview has always been a deciding factor. It answers critical questions: What proficiencies does this candidate bring to the table? Does this candidate seem to be a good fit within the community? Interviews are used to assess individuals’ decision-making, technical and people skills, judging whether their abilities and traits match the company’s culture. However, this procedure may soon change.

The current labor market is becoming increasingly competitive. Despite the overall benefits of falling unemployment, lower unemployment rates leave fewer qualified candidates on the hunt for jobs and thus make the application process more selective. Additionally, the development of AI and other innovations has made positions in industries such as human resources and finance, as well as labor-intensive roles, harder to find. On top of technology, a recent trend of job exploration, in which employees aspire to explore their personal interests and shift jobs throughout their careers, results in lower employee retention and a more competitive job application process.

The competitive job market, combined with the falling cost of technology, has catalyzed the emergence of a market for job interview robots. One of the most prominent robots in the industry at the moment is a Swedish robot named Tengai. A long-anticipated project by Furhat Robotics, Tengai finally launched last October in collaboration with the renowned recruitment firm TNG. In an interview with the BBC, TNG’s chief innovation officer Elin Öberg Mårtenzon explained that the firm’s motive for the dramatic shift in interview methodology arose from a desire to “challenge” the effects of human judgement in the hiring process. Likewise, recruiting firms throughout Silicon Valley are using robots to score applicants’ interviews with preprogrammed algorithms. However, technology isn’t perfect, and these robots have provoked a multitude of conflicting opinions.

On the one hand, supporters of robot interviews believe that the consistency of the machine allows for a fair judgement of individual candidates. In his book Beauty Pays: Why Attractive People Are More Successful, economist Daniel Hamermesh argues that society favors the attractive: they tend to have higher employment rates and better pay. Interview robots could level the playing field, since they are designed to judge personal qualities without factoring in bias toward physical appearance. Not only do the robots ignore attractiveness, but they also uproot the homogeneity common within companies by removing the innate tendency of human interviewers to connect with, and hire, people who resemble them. Interview robots do this with algorithms meant to uncover the authentic traits that qualify a candidate for a job, analyzing everything from speech tendencies to smile frequency, as sketched below. Because the robots are designed to be rid of human opinions, they create a more diverse applicant pool, searching beyond the surface-level traits that human employers may be biased toward, such as the prestige of a candidate’s university education or their family background.
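To make that idea concrete, here is a minimal sketch of how a feature-based interview scorer might work. Every feature name, weight and value below is hypothetical, invented for illustration; this is not Tengai’s or any vendor’s actual model.

```python
# Hypothetical feature-based interview scorer (illustrative only).
# Note what is absent: appearance, school prestige, family background.

CANDIDATE_FEATURES = {
    "speech_rate_wpm": 148,     # words per minute, from transcript timing
    "smile_frequency": 0.22,    # fraction of video frames with a smile
    "filler_word_ratio": 0.04,  # "um"/"uh" per word spoken
    "answer_relevance": 0.81,   # answer similarity to the job description
}

WEIGHTS = {                     # invented weights, not a real model's
    "speech_rate_wpm": 0.001,
    "smile_frequency": 0.3,
    "filler_word_ratio": -2.0,
    "answer_relevance": 1.0,
}

def score(features: dict) -> float:
    """Weighted sum of behavioral features; higher is better."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

print(f"candidate score: {score(CANDIDATE_FEATURES):.3f}")
```

Whatever the specific features, the key design choice is the same: the model only sees what it is given, so leaving appearance and pedigree out of the feature set is how these systems aim to sidestep those particular biases.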

On the other hand, these algorithms sometimes account for certain traits incorrectly. In 2017, Amazon shut down an experimental job recruiting engine because it discriminated against women. The system had been fed past resumes so it could learn hiring patterns within the company and apply them to future recruitment, but that faulty approach only exacerbated discrimination against individuals who differed from the mainstream worker, namely women. Amazon ultimately distanced itself from the project, claiming that its recruiters never relied solely on the computer-generated rankings when making final hiring decisions.
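The mechanism behind such failures is worth spelling out: a model fit to historical hiring decisions learns whatever correlations those decisions contain, fair or unfair. A toy illustration on synthetic data (this is not Amazon’s actual engine, and all numbers are invented):

```python
# Toy demonstration that training on biased historical decisions
# reproduces the bias. Synthetic data; not Amazon's actual system.
import random

random.seed(0)
resumes = []
for _ in range(1000):
    has_womens_token = random.random() < 0.5   # e.g., "women's chess club"
    qualified = random.random() < 0.5
    # Biased historical decision: merit matters, but the token costs
    # candidates dearly regardless of merit.
    hired = qualified and (random.random() < (0.3 if has_womens_token else 0.9))
    resumes.append((has_womens_token, qualified, hired))

for token in (False, True):
    subset = [r for r in resumes if r[0] == token and r[1]]
    rate = sum(r[2] for r in subset) / len(subset)
    print(f"qualified, women's token={token}: historical hire rate {rate:.0%}")

# Any model rewarded for agreeing with these labels will learn to
# penalize the token, i.e., it inherits the bias in the data.
```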

Further criticism of robots in the hiring process stems from complaints that the interview is becoming dehumanized and methodical. There is now an exact right-or-wrong formula judging every word out of a candidate’s mouth, with no social cues or feedback from the interviewer on whether the conversation is going well. The interview becomes one-sided, since the robots also cannot answer personal questions that would help the candidate form a better understanding of the company. This matters because while interviews are mainly conducted to assess candidates’ qualifications for the job, they also give candidates the information they need to decide whether they see themselves as a good fit. Talking to your own reflection in a webcam or a robot’s camera eyes simply does not land the same way. The robots currently on the market are not refined enough to display humanistic behaviors or to be programmed specifically for individual companies.

As of now, technology does not seem advanced enough for interview robots to replace traditional human recruiters. While AI can be useful for efficiency and for simply getting candidates through the door, the algorithms are not precise enough to make the final decision. A balance of interview robots, for screening candidates’ basic qualifications, and human recruiters, for judging company fit, seems to be the most popular mix of automated recruiting; a majority of Fortune 500 companies have shifted to using technology in at least one step of their hiring process. However, with constantly evolving technology, in a few decades the hiring process could shift entirely to put applicants face to face with robots.

Artificial intelligence, commonly referred to as A.I., has risen in prominence in the last decade. One of the industries A.I. may affect most significantly is healthcare; experts believe that because A.I. can sift through large swaths of data and detect patterns within them, it is only a matter of time before it finds a home in medicine. Naturally, this is concerning for the labor market, and many healthcare employees are anxious about being replaced by computer automation. However, while artificial intelligence may cause short-term job loss, it will also force a shift in the health industry toward empathetic patient care, which will generate future jobs and entail numerous other economic benefits.

Artificial intelligence could indeed cause job loss. General practitioners’ (GPs’) ability to sort through patients’ medical data and diagnose illnesses pales in comparison to the speed, efficiency and accuracy with which computers can perform the same task. Beyond this profession, the most at-risk jobs lie in radiology, dermatology and pathology, whose work is predicated even more heavily on data and pattern recognition. Nick Bryan, the department chair of diagnostic medicine at the University of Texas at Austin, argued in an interview, “I predict that within 10 years no medical imaging study will be reviewed by a radiologist until it has been pre-analyzed by a machine.”

Yet, in many of these cases, job loss is not a foregone conclusion. Bryan and his colleague, Michael Recht, wrote an article for the Journal of the American College of Radiology which stated, “We believe that machine learning and AI will enhance both the value and the professional satisfaction of radiologists,” rather than supplant radiologists entirely, because AI will allow “us to spend more time performing functions that add value and influence patient care and less time doing rote tasks that we neither enjoy nor perform as well as machines.”

The market for empathetic, “people-focused” practitioners is large, which will protect the medical labor market even as A.I. grows. The idea that care must be truly human-centric has seeped into the health sphere in recent years as research has documented the health benefits derived from genuine patient-clinician interaction. In his book Emotional Intelligence, Daniel Goleman argues that “many patients can benefit measurably when their psychological needs are attended to along with their purely medical ones.” To support his claim, he cites a study of elderly patients with hip fractures at the Mt. Sinai School of Medicine, which concluded that “patients who received therapy for depression in addition to normal orthopedic care left the hospital an average of two days earlier; total savings for the hundred or so patients was $97,361 in medical costs.”

While it is clear the market for empathetic care exists, it remains unfilled. According to an article published by the National Center for Biotechnology Information, titled “Time Allocation in Primary Care Office Visits,” the average office visit in the United States lasts about 17.4 minutes, with a median of about 15.7. Even worse, physicians must spend much of this time on data entry and reading medical records, leaving actual talk time between patient and physician at just over 10 minutes. Consequently, many patients report feeling rushed during their visits, and many practitioners admit to feeling burnt out. Fortunately, improved A.I. technology will be able to alleviate these problems by handling the bulk of data digestion and analysis for doctors, allowing clinicians to focus on the patients themselves.

This unfilled market for empathetic, patient-focused care suggests that artificial intelligence will not serve simply as a wholesale replacement for health professionals; instead, it will eliminate the minutiae of rote tasks and allow professionals to truly connect with their patients. As venture capitalist Kai-Fu Lee argues in his book AI Superpowers, “[A.I.] lets all doctors and nurses focus on the human tasks that no machine can do: making patients feel cared for and consoling them when the diagnosis isn’t bright.” Discussing radiologists in particular, Eric Topol asserts in his book Deep Medicine, “It will be the radiologist who… is best positioned to communicate results to patients and provide guidance for how to respond to them.” Overall, much of the job disruption artificial intelligence will initiate within the health sector entails changes in job descriptions rather than the familiar cycle of job loss and reinvention typically observed in periods of creative destruction.

Moreover, this emphasis on quality, patient-driven care carries numerous other economic ramifications. It will reduce healthcare costs: not only will more attentive care decrease medical bills, but professionals will also become more cost-efficient by focusing on valuable tasks. More careful diagnoses, for example, will curb the growing problems of misdiagnosis and unnecessary prescriptions. Meanwhile, the already fast-growing biotechnology sector will only accelerate. According to Statista, from 2012 to 2016, revenues in the biotech industry increased from $89.7 billion to $139.4 billion, and the number of public companies grew from 602 to 704. With the culture shift toward personal care entailed by artificial intelligence’s entrance into traditional medicine, as well as by the capabilities of artificial intelligence itself, this industry will only expand in the future.
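For a sense of scale, those Statista figures work out to just under 12 percent compound annual revenue growth; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the Statista figures cited above.
revenue_2012 = 89.7    # biotech industry revenue, billions USD
revenue_2016 = 139.4
years = 2016 - 2012

revenue_cagr = (revenue_2016 / revenue_2012) ** (1 / years) - 1
print(f"revenue CAGR, 2012-2016: {revenue_cagr:.1%}")         # ~11.7% per year

company_cagr = (704 / 602) ** (1 / years) - 1
print(f"public-company CAGR, 2012-2016: {company_cagr:.1%}")  # ~4.0% per year
```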

Overall, a shift to what Goleman labeled “medicine that cares” will benefit every participant in the health sector. That being said, A.I. remains limited in its capabilities, as the IBM Watson MD Anderson debacle demonstrates: $62 million was lost in a failed attempt to turn Watson to curing cancer. A.I. remains in its nascent stages, and it will be some time before the healthcare industry experiences true disruption at its hands. However, its future capabilities are vast, and while disruption will always mean discomfort and some losses, A.I.’s potential to catalyze a return to “medicine that cares” will do wonders for healthcare economics at large.

The possibility of intelligence in the inorganic world has fascinated humanity since at least ancient Greece, when mythical beings of bronze and ivory protected islands and seduced men. Now, a couple thousand years later, we are rapidly closing the gap between those myths and reality. The possibility of developing a computerized mind accompanied the advent of computing science in the early-to-mid 20th century, and in 1956 the formal study of the subject was born here in Hanover at the Dartmouth Summer Research Project on Artificial Intelligence.

Google’s victory over the world champion Go player in March marks the latest in a series of publicity stunts that have tracked the rapid advancement of artificial intelligence (AI). Previous such displays include IBM’s triumphs in chess with its computer Deep Blue and in Jeopardy! with Watson.

These events may seem to be all fun and games on the surface, but a dramatic shift in the AI industry is taking place behind the scenes. According to Quid, a data firm, spending on AI deals by tech giants like Facebook, Google, Microsoft and Baidu quadrupled between 2010 and 2015. This spending spike is symptomatic of a high-stakes race among tech companies to become the preeminent supplier and developer of AI technology, which promises to be the platform of the future much as the PC operating system and Google’s search engine were before it.

To build an artificial brain, computer scientists need real ones, which has made human capital the most precious, and therefore most aggressively pursued, resource in this clash of the tech titans. The explosion in demand for AI talent has created fears of a brain drain at America’s most elite universities. In the “AI winter” of the 1980s and 1990s, when the research being done was far less marketable, the best and brightest in the field found academia the most welcoming and lucrative option. Now, however, professors are finding it hard to hold on to graduate students, who are being lured away, even before they graduate, by million-dollar salaries, the promise of making a tangible impact and freedom from a world of uncertain academic funding. As Andrew Moore, the dean of Carnegie Mellon University’s computer science department, stated in a recent Economist article, this phenomenon raises concerns about a possible “seed corn” problem that could drain top universities of the very resources necessary to produce the next generation of talent. Yet the same salaries that are so effectively luring talent out of academia ought to lure new talent into the field; after all, it is in the best interest of the tech companies to maintain a sufficient pool of talent from which to draw.

If the development of superior AI is a race, what does it mean to win? Given the nature of AI, this is a loaded question. Because an artificial intelligence system would learn and improve upon itself, the better system would do so more quickly than its competitors, leading to a snowball effect that could quickly produce a drastic imbalance in the industry. This, combined with the existential and moral concerns attached to the field, has led some big names in the industry, like Elon Musk, to take precautions against any single entity gaining too much power. Musk went about this by pledging, along with other donors, $1 billion to fund OpenAI, a non-profit research organization whose goal is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

So who among the tech titans poses the greatest threat of becoming the AI hegemon? Given its track record, it is likely no surprise that the perennial powerhouse Google is developing a convincing lead in the field. Not only is it well-equipped with the capital and infrastructure to pursue such an economy of scope, but it has also demonstrated impressive foresight and instinct in its approach. As one of the first tech conglomerates to dip its toes into AI, Google recruited Stanford researcher Andrew Ng in 2011, providing him the funding and freedom to pursue advancements in the field of “deep learning” and thereby kick-starting the project known as “Google Brain.” Deep learning is a method of machine learning based on the construction of artificial “neural networks” that mimic the functioning of the human brain.
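For a rough sense of what that means in practice, a neural network is, at bottom, layers of weighted sums passed through nonlinear functions. The minimal sketch below uses invented weights and is purely illustrative; Google Brain’s actual models are vastly larger and learn their weights from data.

```python
# Minimal "neural network": two layers of weighted sums, each passed
# through a sigmoid nonlinearity. Weights are invented for illustration.
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sum plus bias, squashed by a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.2]                                       # input features
h = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.0, 0.1])   # hidden layer
y = layer(h, [[1.1, -0.7]], [0.05])                   # output layer
print(f"network output: {y[0]:.3f}")
# "Deep" learning stacks many such layers and tunes the weights from data.
```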

This method is superior to others in that it is a general-purpose technology: a single algorithm for processing and learning from data can be applied to myriad specific tasks. Of course, Google wouldn’t dare put all its eggs in one basket, and in early 2014 it supplemented its AI efforts with the purchase of DeepMind. Reportedly shelling out some $600 million for the firm, Google incubated the efforts of Demis Hassabis, DeepMind’s founder, in his quest to “solve” artificial intelligence using a method known as reinforcement learning, sketched below. AlphaGo, the software that recently beat the world champion at Go, is based on this technique. Not only is Google leading the way in developing the underlying software for AI, but it is also making considerable inroads in applying these technologies. According to the California DMV, as of March 1, 2016, Google held 73 test permits for self-driving cars, compared to eight for Tesla, the second most in the state.
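Reinforcement learning takes a different route from supervised learning: an agent improves through trial and error, guided by rewards rather than labeled examples. A minimal tabular Q-learning sketch on a toy problem conveys the flavor; AlphaGo’s actual system couples this idea with deep neural networks and tree search.

```python
# Tabular Q-learning on a toy corridor: the agent learns to walk right
# toward a reward at the final state. Illustrative only.
import random

random.seed(1)
N_STATES, ACTIONS = 5, (-1, +1)         # positions 0..4; step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                    # training episodes
    s = 0
    while s != N_STATES - 1:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(f"learned policy (per state): {policy}")   # all +1: always move right
```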

While some, like Elon Musk and Stephen Hawking, have voiced concerns about the existential threat of rapidly advancing AI, there are more salient concerns to be had about its impact on our lives. The heart and soul of AI in its current applications is data. For example, the machine-learning software behind Google’s targeted advertising is based on collecting information about users’ day-to-day online behavior. As the applications of AI expand into areas such as self-driving cars, the amount of data collected about consumers and their behavior will skyrocket. These machine-learning algorithms will then be processing oceans of data about daily travel habits: where people eat, shop and work. Ceding control of a car may allow marketing tools like corporate sponsorship to steer a passenger’s future travel without the consumer’s awareness, and the car will likely even listen to passengers’ conversations. As The Atlantic’s Adrienne LaFrance put it, “In this near-future filled with self-driving cars, the price of convenience is surveillance.”

AI has a profound potential to benefit humanity, but we must also respect its potentially adverse manifestations. Clearly there must be regulation of the means, methods and motivations behind the technology’s advancement, yet we must be careful not to stifle its progress. The world has reached an inflection point in artificial intelligence, and its future will be a balancing act conducted at the pace of Moore’s law.