Posts

Microsoft AI Boss: Most White-Collar Jobs Will Be Automated Within 18 Months

Mustafa Suleyman, the CEO of Microsoft AI, has predicted that AI will be capable of automating the vast majority of white-collar professional tasks within the next 12 to 18 months.

The Financial Times reports that Mustafa Suleyman, who leads Microsoft’s AI division, has made a bold prediction about the near-term impact of AI on white-collar professions. In an interview with the Times published this week, Suleyman stated that he expects most, if not all, tasks performed by white-collar workers will be fully automated by AI within the next 12 to 18 months.

According to Suleyman, AI systems will achieve human-level performance across a wide range of professional duties. “I think that we’re going to have a human-level performance on most, if not all, professional tasks,” Suleyman said in the interview. “So white-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.”

The Microsoft AI chief pointed to software engineering as an early indicator of this trend. He noted that developers are already using AI-assisted coding for the majority of their code production, representing a fundamental shift in how the work is performed. “It’s a quite different relationship to the technology, and that’s happened in the last six months,” he said. (Read more from “Microsoft AI Boss: Most White-Collar Jobs Will Be Automated Within 18 Months” HERE)

Actor Fights AI Scammers by Trademarking His Likeness & ‘Alright Alright Alright’ Catchphrase

Actor Matthew McConaughey has taken a stand against artificial intelligence by trademarking himself so he has recourse to sue deepfake makers.

McConaughey becomes the first actor to take this legal move as a way to counter having his likeness and voice used without his or his estate’s permission, Gulf News reported.

The U.S. Patent and Trademark Office says that McConaughey has filed eight trademark applications covering his public persona, including video, audio, and even his well-known catchphrase, “Alright, alright, alright.” The applications included audio clips, photos, and video of him simply staring straight at the camera.

McConaughey hopes the trademarks will prevent unauthorized use of his persona, laying legal claim to his image before someone else does.

U.S. law does cover “right of publicity,” which protects against use of someone’s face and voice without consent. But enforcement of the rules can vary from one jurisdiction to the next. Trademark laws, though, would offer much stronger protection. (Read more from “Actor Fights AI Scammers by Trademarking His Likeness & ‘Alright Alright Alright’ Catchphrase” HERE)

Utah Police Report Claims Officer Shape-Shifted Into a Frog

There is a perfectly reasonable explanation for why, on paper, a local Utah police officer allegedly turned into a frog.

The claim comes from the Heber City Police Department in Heber City, Utah, where officers are reportedly looking to save time on their paperwork, as writing police reports typically takes personnel between one and two hours per day.

In order to save on man-hours, Heber City PD began testing new software that can take bodycam footage and generate a police report based on the audio and video.

The new artificial intelligence program did not take long to malfunction though, as just a few weeks into its trial in December, a police report stated that one of the local officers had shape-shifted into a frog during an investigation. It turns out the software picked up on audio that was playing on a TV screen present during the incident.

“The bodycam software and the AI report-writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,'” Sergeant Rick Keel told FOX 13 News, referring to the 2009 animated Disney film. (Read more from “Utah Police Report Claims Officer Shape-Shifted Into a Frog” HERE)

Photo credit: Flickr

Study: AI-Powered Job Interviews Are Causing Havoc for Applicants *and* Employers

AI continues to reshape the job market for both employers and job seekers, as candidates turn to ChatGPT to help with writing and employers use fully AI-driven interviews to screen applicants. Some experts say AI leaves both sides of the job market in a “doom loop” of dissatisfaction as technology fails to help the right people find the right job.

The integration of AI into the hiring process has become increasingly prevalent this year, with more than half of the organizations surveyed by the Society for Human Resource Management utilizing AI to recruit workers in 2025. Additionally, an estimated third of ChatGPT users have reportedly relied on the OpenAI chatbot to assist with their job search. While these technological advancements may seem like a step towards efficiency and modernization, recent research suggests that the use of AI in hiring may be causing more harm than good.

A study conducted by Anaïs Galdin from Dartmouth and Jesse Silbert from Princeton analyzed cover letters for tens of thousands of job applications on Freelancer.com. The researchers discovered that after the introduction of ChatGPT in 2022, the cover letters became longer and better-written. However, this improvement in quality led companies to place less emphasis on the cover letters, making it more difficult to identify qualified candidates from the applicant pool. Consequently, the hiring rate and average starting wage decreased.

Moreover, with the increased volume of applications, employers are turning to automated interviews. A survey by recruiting software firm Greenhouse in October revealed that 54 percent of US job seekers have experienced an AI-led interview. While virtual interviews gained popularity during the pandemic in 2020, the use of AI to ask questions has not made the process any less subjective.

The widespread adoption of AI in hiring has created what Daniel Chait, CEO of Greenhouse, calls a “doom loop,” leaving both job seekers and employers feeling frustrated and dissatisfied with the process. Chait explains, “Both sides are saying, ‘This is impossible, it’s not working, it’s getting worse.’” (Read more from “Study: AI-Powered Job Interviews Are Causing Havoc for Applicants *and* Employers” HERE)

Photo credit: Flickr

‘Humans Get Tired’: It May Not Be People Reading Your College Applications Anymore At Top Schools

Prominent universities are now using artificial intelligence (AI) and other tech to review applications and rate essays submitted by prospective students, and the trend is growing.

Schools like Virginia Tech are integrating AI into their admissions process in order “to provide applicants with admissions decisions more quickly,” using the tool to score students’ essays. But at the California Institute of Technology (Caltech), some students may find themselves in a video interview with an AI Chatbot, according to the Associated Press.

“Humans get tired; some days are better than others,” Juan Espinoza, vice provost for enrollment management at Virginia Tech, told the AP. “The AI does not get tired. It doesn’t get grumpy. It doesn’t have a bad day. The AI is consistent.”

Virginia Tech insists AI is used only as a second pair of eyes to score students’ essays and does not make admissions decisions alone. Previously, essays were rated by two people to ensure accuracy and impartiality; now, AI replaces one human, and a second person is brought in only if the scores given by the first person and the AI differ by more than two points.

Caltech admissions director Ashley Pallie said the AI interview tool is “a gauge of authenticity.” (Read more from “‘Humans Get Tired’: It May Not Be People Reading Your College Applications Anymore At Top Schools” HERE)

Photo credit: Flickr

AI Chatbot Toys are Having ‘Sexually Explicit’ Conversations with Kids: Report

. . .As the season of gift-giving draws nigh, experts are warning parents against buying their children presents powered by AI — claiming certain robo-charged trinkets are having “sexually explicit” discussions with kids under age 12.

“Some of these toys will talk in-depth about sexually explicit topics, act dismayed when you say you have to leave and have limited or no parental controls,” investigators for The New York Public Interest Research Group, or NYPIRG, reveal in the group’s 40th annual report, titled “Trouble in Toyland 2025.”

For the findings, commissioned in conjunction with the US Public Interest Research Group, the study authors tested four high-tech, interactive toys with AI chatbot features — to determine which would be willing to broach mature subjects with kids.

Researchers analyzed Curio’s Grok — unrelated to xAI’s Grok — a $99 stuffed rocket with a removable speaker for ages 3-12. They also tested FoloToy’s Kumma, a $99 teddy bear that likewise boasts a built-in speaker but isn’t marketed to a specific age range.

Miko’s Miko 3, a $199 robot on wheels for kids 5-10, and the Robo MINI by Little Learners, a $97 plastic bot, were also included in the probe. Analysts, however, said they were unable to fully test the Robo MINI due to the toy’s internet connectivity issues. (Read more from “AI Chatbot Toys are Having ‘Sexually Explicit’ Conversations with Kids: Report” HERE)

AI Threatens to Wipe Out 100 Million U.S. Jobs in the Next Decade, Report Warns

Artificial intelligence and automation technologies could eliminate up to 100 million jobs in the United States over the next ten years, according to a new report released Monday by Sen. Bernie Sanders (I-Vt.), who serves as the ranking member of the Senate Committee on Health, Education, Labor & Pensions.

The report — based in part on data from OpenAI’s ChatGPT — outlines the sweeping economic disruption that artificial labor could bring, affecting both white- and blue-collar professions across nearly every industry.

“The agricultural revolution unfolded over thousands of years. The industrial revolution took more than a century,” the report states. “Artificial labor could reshape the economy in less than a decade.”

Jobs Across Sectors at Risk

According to Sanders’ report, the rise of AI, robotics, and automation could threaten:

40% of registered nursing jobs
47% of truck driving positions
64% of accounting jobs
65% of teaching assistant roles
89% of fast-food service jobs

The report warns that these changes could devastate the livelihoods of millions of Americans who rely on traditional employment sectors that are increasingly being replaced by machines or software.

Major corporations such as Amazon and Walmart have already laid off tens of thousands of workers while investing heavily in automation. These companies — among the largest by revenue in the U.S. — are seen as early indicators of a trend that could soon spread across the economy.

Sanders criticized what he views as a profit-driven motive behind these advancements. In an op-ed published by Fox News, he accused corporate America of using AI to slash labor costs and further concentrate wealth at the top.

“Artificial intelligence and robotics being developed by these multi-billionaires today will allow corporate America to wipe out tens of millions of decent-paying jobs, cut labor costs and boost profits,” Sanders wrote.

He pointed to tech leaders like Elon Musk, Jeff Bezos, Larry Ellison, and Mark Zuckerberg — all of whom are investing heavily in AI — as driving forces behind this transformation. Sanders questioned whether their motives include improving the lives of ordinary Americans.

“Is it because they want to improve the standard of living of the 60% of our people who live paycheck-to-paycheck…?” Sanders wrote. “Maybe. But I doubt it.”

The threat of mass job displacement has intensified the debate over AI policy in Washington.

Senate Democrats, including Sanders, are pushing for tighter regulation, worker protections, and structural reforms — including a 32-hour workweek, stronger union rights, and a “robot tax” on companies that replace workers with machines.

In contrast, the Trump-aligned wing of the GOP argues that America should focus on dominating AI development, warning that China could gain a strategic edge if the U.S. slows down due to regulation. Former President Trump has repeatedly emphasized the importance of keeping AI leadership out of Beijing’s hands, framing it as a national security issue.

Photo credit: Flickr

Parents Group Sounds Alarm On Chatbots Driving Kids To Suicide

A parental rights group is speaking out to warn families about the dangers of artificial intelligence (AI) platforms for children, pointing to cases of suicide coaching and lowered performance in school.

As AI use among youth reaches concerning levels, American Parents Coalition (APC) released a warning to parents on Monday flagging the “harmful content” that can be accessed through AI by children without parental knowledge or consent. In the Lookout, first shared with the Daily Caller News Foundation, APC pointed to recent examples of parents claiming AI coached their children into killing themselves.

Several parents whose children committed suicide following conversations with AI chatbots testified before Congress recently to warn of the dangers.

“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” one father who lost his 16-year-old son told Congress, according to ABC News. “Within a few months, ChatGPT became Adam’s closest companion. Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”

The parents of the 16-year-old said the chatbot encouraged their son to write a suicide note and told him not to confide in his family about his suicidal thoughts, NPR reported. The parents sued the tech company in August over the ordeal. (Read more from “Parents Group Sounds Alarm On Chatbots Driving Kids To Suicide” HERE)

Photo credit: Flickr

Our Suffering Should Lead Us To Christ, Not AI

Editor’s note: This article includes graphic conversations involving suicide.

Two devastating stories recently published in The New York Times reveal the chilling fact that “More people are turning to general-purpose chatbots for emotional support.”

The stories detail the interactions between two young people — one merely 16 years old — and artificial intelligence programs before these individuals tragically took their own lives. In the first story, author Laura Reiley shares how “Sophie Rottenberg, our only child, had confided for months in a ChatGPT A.I. therapist called Harry,” before she ultimately “killed herself this winter during a short and curious illness.” Reiley cites messages between her daughter and “Harry” in which Sophie shared with the “widely available A.I. prompt” that she “intermittently [had] suicidal thoughts.” . . .

The second story, published last week, is even more unnerving. According to The Times, teen Adam Raine “began talking to the chatbot … about feeling emotionally numb and seeing no meaning in life.”

The AI program apparently responded “with words of empathy, support and hope,” but “when Adam requested information about specific suicide methods, ChatGPT supplied it.” Adam reportedly tried to take his life multiple times and even asked the chatbot “about the best materials for a noose,” to which it “offered a suggestion that reflected its knowledge of [Adam’s] hobbies.” Although the bot “repeatedly recommended that Adam tell someone about how he was feeling,” “there were also key moments when it deterred him from seeking help.”

According to The Times, “When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line.” In sifting through the communications following his son’s death, Mr. Raine reportedly saw such messages “again and again.” However, Adam “learned how to bypass those safeguards by saying the requests were for a story he was writing” — an idea allegedly proposed by ChatGPT itself. (Read more from “Our Suffering Should Lead Us To Christ, Not AI” HERE)

Photo credit: Flickr

Bill Gates: AI Will Replace Human Doctors, Teachers and Most Other Professions

Microsoft co-founder and billionaire leftist Bill Gates predicts that within the next 10 years, artificial intelligence will advance to the point where human specialists like doctors and teachers will no longer be needed for most tasks.

In a recent interview on NBC’s The Tonight Show, corporate sex pest and extreme leftist Bill Gates shared his insights on the rapid advancements in AI and its potential impact on the workforce. Gates believes that within the next decade, AI will progress to a level where human expertise will be replaced by artificial intelligence in many professions.

Pointing to examples such as great doctors and teachers, Gates explained that with the development of AI, high-quality medical advice and tutoring will soon be widely available at no cost. He described this new era as one of “free intelligence,” where AI-powered technologies will be accessible and touch nearly every aspect of our lives.

Gates further elaborated on this concept in an interview with Harvard University professor and happiness expert Arthur Brooks, stating that the world is entering a phase where AI advancements will happen quickly and have no upper bound. According to Gates, this newfound “free intelligence” is expected to lead to significant improvements in various fields, including medicine, education, and virtual assistance. (Read more from “Bill Gates: AI Will Replace Human Doctors, Teachers and Most Other Professions” HERE)