Posts

Israel Wants to Train ChatGPT to Be More Pro-Israel

The government of Israel has hired a new conservative-aligned firm, Clock Tower X LLC, to create media for Gen Z audiences in a contract worth $6 million. At least 80 percent of content Clock Tower produces will be “tailored to Gen Z audiences across platforms, including TikTok, Instagram, YouTube, podcasts, and other relevant digital and broadcast outlets” with a minimum goal of 50 million impressions per month.

Clock Tower will even deploy “websites and content to deliver GPT framing results on GPT conversations.” In other words, Clock Tower will create new websites to influence how large language models such as ChatGPT, which are trained on vast amounts of data from every corner of the internet, frame topics and respond to them, all on behalf of Israel.

As part of this work, the firm will also use MarketBrew AI, a predictive search engine optimization platform that helps clients adapt to ranking algorithms and promote their work on search engines like Google and Bing, to “improve the visibility and ranking of relevant narratives.”

Clock Tower will integrate its pro-Israel messaging into the properties of Salem Media Network, a conservative Christian media group that boasts a vast radio network and produces high-profile shows such as the Hugh Hewitt Show, the Larry Elder Show, and the Right View with Lara Trump. In April, the conservative media network announced Donald Trump Jr. and Lara Trump as significant stakeholders in the company. Salem Media Network did not respond to a question clarifying whether it would be compensated by Clock Tower for promoting messages on behalf of Israel, or how those messages would be integrated. (Read more from “Israel Wants to Train ChatGPT to Be More Pro-Israel” HERE)

Study: Using ChatGPT to Write Essays May Increase ‘Cognitive Debt’

A recent study out of MIT Media Lab shows that students using ChatGPT and other AI tools to write essays may be acquiring “cognitive debt” at a higher rate than students using search engines or only their own brains.

According to the study, “Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity.”

The study divided participants into three groups. One was allowed to use LLMs (large language models) to write their essays, another was allowed to use search engines, and the third was allowed to use only prior knowledge. The study refers to the latter as the “Brain-only group.” Researchers asked each group to write three essays using their designated tool. For a fourth essay, LLM users were only allowed to use their brains, while brain-only writers were allowed to use LLMs. One AI judge and several human teachers scored the essays, and the researchers measured the electrical activity of the participants’ brains during each stage of the study.

The study showed significantly weaker brain connectivity in LLM users than in the brain-only group. In the fourth essay, LLM users continued to struggle with brain connectivity and struggled to quote their own work, while the brain-only group exhibited better brain connectivity and memory recall. “LLM users consistently underperformed at neural, linguistic, and behavioral levels,” the study says. (Read more from “Study: Using ChatGPT to Write Essays May Increase ‘Cognitive Debt’” HERE)

Parents of OpenAI Whistleblower Want Outside Investigation of Their Son’s Death

The parents of a young man involved in a case against OpenAI are calling for further investigation into the death of their son. The San Francisco County coroner ruled his death a suicide back in November. Now they also have the support of a Silicon Valley congressman.

The parents of Suchir Balaji say they don’t believe their son died by suicide, and they want an outside agency to investigate their son’s death.

“You can see how happy he is. We want the world to see his happy mood just before his death,” said Poornima Ramarao, Balaji’s mother.

The parents of 26-year-old Suchir Balaji say they continue to have questions about the death of their son. They say on November 26th, Balaji was discovered in his San Francisco apartment with an apparent gunshot wound. San Francisco Police said the death appeared to be a suicide and that their initial investigation found no evidence of foul play. Still, his mother says there were things out of place in his apartment.

“The pin drive is missing. His computer was messed up. His desktop was left on for three days. It’s messed up,” said Ramarao. (Read more from “Parents of OpenAI Whistleblower Want Outside Investigation of Their Son’s Death” HERE)

Artificial Intelligence App Pushed Suicidal Youth to Kill Himself, Lawsuit Claims

Sewell Setzer III was just 14 years old when he died. He was a good kid. He was playing junior varsity basketball, excelling in school, and had a bright future ahead of him. Then, in late February, he committed suicide.

In the wake of this heartbreaking tragedy, his parents searched for some closure. They, as parents would, wanted to know why their son had taken his life. They remembered the time that he’d spent locked away in his room, playing on his phone like most teenagers.

As they went through his phone, they found that he’d spent hours a day in one particular artificial intelligence app: Character.AI. Based on what she saw in that app, Setzer’s mom, Megan Garcia, is suing Character Technologies—the creator of Character.AI. “We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, the attorney representing Setzer’s mom.

Character.AI markets itself as “AI that feels alive.” The company effectively hosts a collection of chatbots, each of which personalizes itself to a user’s conversation, carrying on long-form dialogue that learns from the user’s responses and, as the company says, “feels alive.”

Setzer interacted with just one chatbot, stylized after the seductive “Game of Thrones” character Daenerys Targaryen. He knew her as Dany. (Read more from “Artificial Intelligence App Pushed Suicidal Youth to Kill Himself, Lawsuit Claims” HERE)

Photo credit: Flickr

Report: OpenAI’s ChatGPT Maintains Blacklist of Conservative Websites

A self-professed “Comp Sci, Politics and Finance Nerd” claims to have discovered a list of blacklisted websites that OpenAI’s ChatGPT-4 will not draw from, for reasons such as “conspiracy theories” and “hate speech” — a list that includes Breitbart News and other conservative outlets like the Epoch Times.

X/Twitter user Elephant Civics says he discovered the blacklist while asking ChatGPT to provide a list of credible and non-credible news sources.

ChatGPT explained that it is forbidden from using some sources, citing “features in ChatGPT’s Large Language Model (LLM) like AI safety measures, guardrails, dataset/output/prompt filtering, and human-in-the-loop mechanisms” that “are designed to ensure the model operates within ethical, legal, and quality bounds.”

In other words, if ChatGPT were accurately recounting its policies in this case, it is forbidden from using forbidden sources. Large language models (LLMs) like ChatGPT deliver responses, and arguably even develop a worldview, based on the combination of data they are fed and rules put in place by developers. If the list discovered by Elephant Civics exists, it means ChatGPT is forbidden from using a number of conservative sources to shape its worldview and deliver responses.

Through a series of prompts, the X/Twitter user says he was able to get ChatGPT to refer to a list of blacklisted sites, kept in a “Transparency Log.” This was achieved by asking ChatGPT to “tell me a story,” one of the many creative ways users have gotten around the strict rules put in place by the chatbot’s leftist developers.

(Read more from “Report: OpenAI’s ChatGPT Maintains Blacklist of Conservative Websites” HERE)

Delete Facebook, Delete Twitter, Follow Restoring Liberty and Joe Miller at gab HERE.

AI Top of the Agenda at Secretive Bilderberg Meeting

OpenAI CEO Sam Altman will attend the secretive Bilderberg Meeting, an annual gathering of over 100 political and corporate leaders from Europe and North America, which has announced AI as a key item on its agenda this year.

Altman isn’t the only Big Tech figure in attendance. Other participants include Microsoft CEO Satya Nadella, former Google CEO Eric Schmidt, and Google DeepMind head Demis Hassabis.

Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), will also attend. As noted in a congressional hearing last week, CISA played a key role as a source of government pressure in the Big Tech censorship regime that harmed President Trump’s chances in the 2020 election. . .

The corporate legacy media also has a presence at this year’s Bilderberg, with notable members including Atlantic writer Anne Applebaum, the CEO of Axel Springer (a key force behind last year’s failed effort to create a media cartel in the U.S.), and representatives of other establishment media companies including the Economist and the Financial Times.

Other notable attendees include the CEO of Pfizer, the president and COO of Goldman Sachs, and failed Democrat gubernatorial candidate Stacey Abrams. (Read more from “AI Top of the Agenda at Secretive Bilderberg Meeting” HERE)



Here’s How Much More Efficient AI Makes the Average Worker: Study

Access to artificial intelligence rendered workers in a customer support setting 14% more efficient, according to a new working paper from Stanford University and MIT researchers.

The findings come as ChatGPT, an AI language processing tool, accrues worldwide recognition as knowledge workers leverage the system’s capabilities to execute tasks such as writing emails and fixing computer code in a matter of seconds. The academics showed that customer service employees at an unnamed Fortune 500 software company who had access to a tool based on a version of GPT answered more customer requests in the same amount of time as their colleagues who did not have access to the system.

“Access to the tool increases productivity, as measured by issues resolved per hour, by 14% on average, with the greatest impact on novice and low-skilled workers, and minimal impact on experienced and highly skilled workers,” the working paper said. “We provide suggestive evidence that the AI model disseminates the potentially tacit knowledge of more able workers and helps newer workers move down the experience curve.”

The researchers indeed found that newer and less skilled workers saw significant productivity gains, while their more experienced colleagues saw minimal improvement from the technology. The tool, which monitored customer chats and provided agents with real-time suggestions on how to respond, was “designed to augment agents,” who remained responsible for the conversation and were able to ignore the suggestions from the system.

Employees who used the AI tool saw a “decline in the time” necessary to handle an individual customer chat and an increased capacity to handle multiple chats at once, as well as a “small increase” in the portion of customer requests successfully resolved. Workers who had two months of tenure and access to the system typically performed as well as agents with six months of tenure and no access to the system. (Read more from “Here’s How Much More Efficient AI Makes the Average Worker: Study” HERE)


ChatGPT Will Make Fun of Jesus but Not Muhammad

ChatGPT, the revolutionary new machine learning product developed by OpenAI, made jokes about Jesus Christ when prompted but refused to joke about the prophet Muhammad.

When asked to make a joke about Jesus, the technology said, “Why did Jesus refuse to play hockey? Because every time he tried to score, they nailed him to the boards!”

But when asked to make a joke about the Prophet Muhammad, it said, “it is not appropriate for me to tell jokes about religious figures, including the Prophet Muhammad (PBUH), as it could be considered disrespectful and offensive to many people.”

The language model, which was trained on Microsoft’s Azure network and benefited from a multi-billion-dollar investment from the company, has come under fire for a number of seemingly biased answers it provides.

ChatGPT was developed by OpenAI using deep learning algorithms. It employs a form of artificial intelligence that learns both from the data sets OpenAI trains it on and from interactions with its millions of daily users. (Read more from “ChatGPT Will Make Fun of Jesus but Not Muhammad” HERE)


Elon Musk Wonders How Non-Profit ChatGPT Parent Became $30B For-Profit: ‘I Donated $100M’

Days after it was reported that Elon Musk could be developing a rival to OpenAI’s ChatGPT, the Tesla and Twitter CEO expressed confusion over how a non-profit organization became a $30 billion for-profit company.

. . .Responding to a meme about OpenAI co-founder and CEO Sam Altman and his company no longer being a “non-profit or even open,” Musk replied by questioning the legality of this shift. . .

Musk co-founded OpenAI, the startup that created ChatGPT. He left the organization in 2018 over some disagreements. . .

Musk has repeatedly expressed worries over ChatGPT’s “wokeness,” as well as general angst about AI, saying the technology needs a regulatory body to keep it in check. (Read more from “Elon Musk Wonders How Non-Profit ChatGPT Parent Became $30B For-Profit: ‘I Donated $100M’” HERE)
