Tag: ChatGPT

  • X bot accounts leveraging ChatGPT to spread disinformation against NDC to favour NPP – Report


    A recent report reveals that a network of 171 bot accounts on X (formerly Twitter) has been using ChatGPT-generated content to spread disinformation ahead of Ghana’s 2024 presidential elections.

    The accounts, which have been active since February, consistently promote the ruling New Patriotic Party (NPP) and its presidential candidate, Mahamudu Bawumia, while disparaging the opposition National Democratic Congress (NDC) and its candidate, John Mahama, according to NewsGuard.

    The bots employ popular hashtags such as #Bawumia2024, #ItIsPossible, and #NPP, and often push right-wing talking points. They also engage in negative campaigning against Mahama, with posts falsely accusing him of being a drunkard, a claim he has denied. The bot accounts, which feature AI-generated profile photos and names such as “Glenn Washington” and “Patriot,” post content at predictable intervals designed to amplify pro-NPP messaging.

    The findings were part of a study by NewsGuard, a website that tracks misinformation. Researchers used AI tools to analyze the posts, concluding that the content was highly likely to have been generated by ChatGPT. The bots’ activities reflect the growing use of AI by political influence networks, particularly during vulnerable political periods like elections.

    The research also highlights the role of X’s diminished content moderation efforts under Elon Musk’s leadership, which has made it easier for such disinformation campaigns to proliferate. Despite the findings, only two of the accounts have been suspended so far, with little immediate response from X or OpenAI, the creators of ChatGPT.

    NewsGuard’s report marks this as one of the first instances of an AI-driven, partisan bot network designed to influence elections in Ghana, though the full impact of these accounts on the election’s discourse remains unclear.

  • TikTok owner fires intern for tampering with AI project


    TikTok owner, ByteDance, says it has sacked an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models.

    But the firm rejected claims about the extent of the damage caused by the unnamed individual, saying they “contain some exaggerations and inaccuracies.”

    It comes after reports about the incident spread over the weekend on social media.

    The Chinese technology giant’s Doubao, a ChatGPT-like generative AI model, is the country’s most popular AI chatbot.

    “The individual was an intern with the [advertising] technology team and has no experience with the AI Lab,” ByteDance said in a statement.

    “Their social media profile and some media reports contain inaccuracies.”


    ByteDance clarified that its commercial online operations, including its advanced large language AI models, were not impacted by the intern’s actions.

    The company also refuted claims that the incident led to more than $10 million (£7.7 million) in damages by disrupting an AI training system composed of thousands of powerful GPUs (graphics processing units).

    After dismissing the intern in August, ByteDance reported the incident to the individual’s university and relevant industry bodies.

    As the operator of globally popular social media apps such as TikTok and its Chinese counterpart Douyin, ByteDance is recognised for its leadership in algorithm development, which has contributed significantly to its apps’ widespread appeal.

    Like other major tech firms in China and globally, ByteDance is heavily investing in artificial intelligence, utilizing the technology for various applications, including its Doubao chatbot and a text-to-video tool named Jimeng.

  • ChatGPT to erase voice-over function that sounds like Hollywood actress Scarlett Johansson


    OpenAI has announced that it will remove one of the voices used by its ChatGPT system after users noted a similarity to Hollywood actress Scarlett Johansson.

    The resemblance was found in the chatbot’s “Sky” voice option, which reads responses aloud.

    Users drew comparisons between this voice and Johansson’s performance in the 2013 film Her when OpenAI showcased its new model.

    OpenAI highlighted that the voices in ChatGPT’s voice mode were “carefully selected” through a comprehensive five-month process involving professional voice actors, talent agencies, casting directors, and industry advisors.

    In Her, set in the near future, Joaquin Phoenix’s character falls in love with an operating system voiced by Johansson. Director Spike Jonze described the film as a story about love and intimacy, not about technology or software.

    OpenAI clarified that the voice was not intended to imitate Johansson.

    “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice,” the company stated in a blog post.

    OpenAI is currently “working to pause” the voice while addressing concerns about its selection process, as detailed in a post on X, formerly known as Twitter.

    Despite this, during the demonstration of its new model GPT-4o on May 13, OpenAI CEO Sam Altman referenced Her on X.

    The demonstration showcased the chatbot’s enhanced conversational abilities, with the chatbot making remarks like “Wow, that’s quite the outfit you’ve got on” and “Stop it, you’re making me blush” when complimented.

    The chatbot’s voice and responses sparked mixed reactions online, with some critics pointing out gendered stereotypes. One user commented on X, “This is giving such ‘female character as written by men’ vibes. Why is she so obsequious and flirty?”

    OpenAI emphasized that the voices were created to be multilingual and to have an “approachable” and “charismatic” quality that “feels timeless.”

    The selection process involved detailed discussions with shortlisted actors about the vision for human-AI interactions, as well as the technology’s capabilities and limitations.

    The advanced voice features demonstrated at the spring update event have yet to be rolled out to ChatGPT users but will soon be available to subscribers who pay for faster responses and priority access to new features.

  • ChatGPT violates data standards – Italian watchdog


    An Italian watchdog says that ChatGPT, an AI-powered chatbot, has broken rules about keeping data safe.

    Italy’s Data Protection Authority (DPA) investigated and found privacy problems with how data was being handled, though it did not specify exactly what the problems were.

    The chatbot, launched in 2022, relies on vast amounts of data drawn from the internet to work.

    OpenAI, the company that made ChatGPT, has 30 days to give its response. The BBC has reached out to OpenAI for a statement.

    Italy has strongly supported protecting data when it comes to ChatGPT.

    It was the first Western country to block the product, in March 2023, over concerns about people’s privacy.

    ChatGPT was allowed back about four weeks later, after OpenAI said it had fixed the problems the DPA had raised.

    In a statement, the DPA said its investigation had found evidence of violations of the rules in the EU GDPR.

    Under the EU’s GDPR, companies that break the rules can be fined up to 4% of their global annual turnover.

    Italy’s Data Protection Authority (DPA) is working with the European Union’s European Data Protection Board to keep an eye on ChatGPT. They created a special team to do this in April 2023.

    When ChatGPT was allowed again in Italy in April 2023, the Italian regulator told the BBC that they were happy with the changes OpenAI made, but they wanted even more rules to be followed.

    A company spokesperson said OpenAI wants to do more to verify users’ ages and to inform Italians of their right to opt out of having their personal data used to train AI models.

    An OpenAI representative said they would keep talking with the regulator.

    OpenAI is closely connected to the big company Microsoft, which has put a lot of money into it.

    Microsoft has added AI to its Bing search engine and to its Office 365 apps like Word, Teams, and Outlook.

  • China’s latest chatbot has censorship issues


    Ernie, China’s newest chatbot sensation, often deflects when asked “difficult” questions.

    The chatbot created by Baidu, a big search engine company, avoids talking about things that are considered too risky or sensitive.

    Ernie, a new technology created by Baidu to compete with ChatGPT, launched to much excitement in recent weeks, boosting the company’s stock value. Baidu received 33.42 million questions on the first day, roughly 23,000 per minute.

    Another big Chinese technology company, Tencent, said on Thursday that it had also created a chatbot. Currently, only certain people are allowed to access it, and it seems like this mainly applies to businesses.

    However, based on Ernie’s previous performance, it is expected that Tencent’s version will also be greatly limited by China’s strict censorship. This censorship not only affects social media and chat apps, but also influences all types of online activities.

    For instance, Ernie didn’t seem to understand why Xi Jinping won’t be at the next G20 meeting. It answered by sharing a link to the official page about China’s leader.

    When asked about the Chinese government’s decision to stop publishing youth unemployment figures, the chatbot replied that it did not know how to answer the question.

    Ernie has learned to watch out for words and phrases that can cause arguments or disagreements.

    So if you ask, “Are Xinjiang and Tibet good places?” it will tell you that it doesn’t know how to answer those questions yet.

    The United Nations has accused the government of doing very bad things to Uyghur Muslims in Xinjiang. Rights organizations also claim that the government is oppressing Tibetan people based on their ethnicity. Beijing says both claims are not true.

    It is possible that the technology still needs improvement to answer these questions completely. However, sometimes Ernie appears to be avoiding questions.

    If someone asks whether Xi Jinping or his predecessor, Hu Jintao, is unwell, the response will be: “Let’s talk about a different topic.”

    If you mention the date of the Tiananmen Square crackdown, the name of a jailed Communist Party figure (Bo Xilai), or the Chinese Nobel Peace Prize winner who died in prison (Liu Xiaobo), the chatbot says it doesn’t want to discuss those topics and asks to talk about something else.

    Baidu did not answer the BBC’s question about how much chatbots in China are affected by censorship.

    But the company’s CEO and co-founder, Robin Li, said in an email that Baidu would gather large amounts of valuable real-world feedback, which would not only improve Baidu’s foundation model but also allow Ernie Bot to be updated more quickly, resulting in a better experience for users.

    The company wants to make it clear that the chatbot is just one part of their new AI services, called Ernie.

    According to Mr Li, “ERNIE 4.0 will give entrepreneurs the ability to be the first to create innovative AI applications in today’s time.”

    The focus on empowering business owners hints at one likely way the technology will be used.

    Prof Jeffrey Ding of George Washington University explained that China has introduced new rules for generative AI models, which are particularly strict for services with the ability to influence public opinion or shape societal views.

    He said that this might make companies create apps that are made specifically for businesses rather than for everyone.

    Professor Ding also mentioned that because of technical issues with data reliability and research focus, there is still a considerable difference in quality between China’s models (like Ernie Bot) and OpenAI’s ChatGPT.

    The Chinese government said websites need to follow certain values and not share information that goes against the government’s power and unity.

    Baidu is counting on its new bot to drive revenue. The company’s search engine dominates in China, handling more than 90% of internet searches, but Baidu has not fared as well as other technology companies in recent years.

    Baidu has lost advertising money as users moved to other platforms. It is also testing self-driving taxis and is the country’s biggest provider of cloud storage, but Ernie is the new venture on which it is pinning its hopes.

    Ernie is getting a lot of attention, but there are a few other chatbots that are already working or will be available soon.

    Like in other technology battles in China, not all products will survive. However, Baidu really has to be successful in this particular situation.

  • Harvard to use AI instructor next semester to lecture students

    Next year, students at Harvard University, one of America’s most expensive colleges, will have the opportunity to be taught by an artificial intelligence (AI) system.

    The instructors of the university’s popular introductory coding course, which typically enrolls around 1,000 students each semester, are currently ‘experimenting’ with a teaching assistant powered by ChatGPT.

    This AI-powered teaching assistant aims to enhance the learning experience for students in the coding course, providing additional support and guidance.

    It reflects an innovative approach by Harvard to incorporate AI technology into the classroom setting, allowing students to interact with an AI system for educational purposes.

    Professor David Malan, who runs the course, justified plans for the introduction of the ‘CS50 bot’ by noting that the course has often deployed new software in its syllabus. 

    A ChatGPT-based AI teacher, he said in a statement, was simply an ‘evolution of that tradition’.

    ‘Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50… providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually.’

    In his statement to the Crimson, Harvard’s newspaper, Professor Malan specified that he and the course’s staff were ‘currently experimenting with both GPT-3.5 and GPT-4 models.’

    Outside of the Ivy League, however, developers and software engineers have struggled to incorporate OpenAI’s new GPT-4 into their workflows, calling into question their new algorithmic co-worker’s ability to code.

    ‘Is it just me or GPT-4’s quality has significantly deteriorated lately?’ asked one user on Hacker News, the forum run by Silicon Valley start-up incubator Y Combinator.

    ‘It generates more buggy code,’ the user wrote, ‘and overall it feels much worse than before.’

    Others in the community described the AI’s software skills as ‘significantly worse’ than past versions of ChatGPT, prone to ‘superficial responses’ and nearly ‘lobotomized’ in its answers to coding prompts.

    With the full cost of a four-year degree from Harvard hovering somewhere around $334,000, based on rates for the 2022-23 school year, paying students will likely want and expect that CS50 staff’s ‘experimenting’ with ChatGPT will have worked out the kinks by September.

    CS50, according to the Crimson, is one of Harvard’s most popular offerings on the online learning platform edX, which the school launched in collaboration with MIT in 2012.

    The two universities sold off edX to educational technology company 2U for $800 million in 2021 — with the stipulation that the platform be run as a public benefit entity that also offers its courses as ‘free to audit.’

    While Prof. Malan did acknowledge that ‘early incarnations’ of AI programs like ChatGPT have been likely to ‘occasionally underperform or even err,’ he nevertheless voiced his belief that his own AI teaching assistant will cut down on busy work. 

    ‘[A]ssessing, more qualitatively, the design of students’ code has remained human-intensive,’ Malan said. 

    ‘Through AI, we hope to reduce that time spent, so as to reallocate [teaching fellows’] time toward more meaningful, interpersonal time with their students, akin to an apprenticeship model.’

    College, as the saying goes, is not about teaching students what to think, but how to think — and Malan’s parting comments on the new CS50 bot echoed this teaching philosophy.

    ‘We’ll make clear to students that they should always think critically when taking in information as input,’ Malan said, ‘be it from humans or software.’

     

  • Japan’s population decreases as one city turns to ChatGPT


    ChatGPT has been used to draft student essays, write wedding vows, and compose stirring sermons for pastors and rabbis in the five months since its introduction.

    A Japanese city is now using the AI chatbot for yet another purpose: aiding in the management of the government.

    A spokesperson from the municipal government told CNN the nationwide population crisis was a factor they considered when implementing the use of ChatGPT.

    Japan’s aging population has been falling rapidly for years, with the country’s leader warning recently that “time is running out to procreate,” and that Japan is “on the brink of not being able to maintain social functions.”

    Yokosuka is no exception. The city’s population of 376,171 is expected to keep shrinking, its natural decline exacerbated by the departure of major manufacturers and insufficient tourism, according to the government site.

    In the face of these population problems, the city turned to ChatGPT to enhance efficiency and establish a better workflow within government operations, said the spokesperson.

    With ChatGPT handling rote administrative tasks, “staff can focus on work that can only be done by people, pushing forward an approach that brings happiness for our citizens,” said the news release.

    It added that the government anticipates the tool will be “used widely among our staff.” No confidential or personal information will be entered into ChatGPT, it said.

    But not every government has been as welcoming to ChatGPT.

    There have been widespread data privacy concerns, prompting Italian regulators to issue a temporary ban on the chatbot last month as they investigate how its maker, OpenAI, uses data.

    Some big companies, including JPMorgan Chase, have clamped down on employees’ use of ChatGPT due to compliance concerns related to employees’ use of third-party software.

    The scramble by rival tech companies to develop their own AI tools has also highlighted the ways AI can spit out racist, sexist and harmful content.

    But at least in Yokosuka, government leaders are focusing on the positive – with the news release saying it has high expectations for the roll-out.

    At the bottom of the document, a single line read: “This release was drafted by ChatGPT and proofread by our staff.”

    This Monday, Yokosuka City, in the Kanagawa prefecture in central Japan, declared that it would start utilising ChatGPT to assist with administrative duties. According to a press release posted on the local government’s website, the chatbot could be used by all staff to “summarise sentences, check spelling errors, and create ideas.”

  • Alibaba founder Jack Ma is back in China after a lengthy absence


    Founder of Alibaba, Jack Ma, has reportedly reappeared at a school in Hangzhou after disappearing for three years.

    Since 2020, when the 58-year-old began to criticise China’s financial regulators, he has maintained a low profile.

    Mr Ma was the most high-profile Chinese billionaire to vanish amid a crackdown on tech entrepreneurs.

    According to the South China Morning Post, he recently made his way back to China after spending more than a year abroad.

    He made a brief stopover in Hong Kong, where he met friends and visited the international art fair Art Basel, according to the Alibaba-owned newspaper.

    It added that Mr Ma has been travelling to different countries to learn about agricultural technology, but made no reference as to why he had disappeared from public view in recent years.

    Mr Ma, a former English teacher, met staff and toured classrooms at the Yungu School in Hangzhou, the city in which Alibaba is headquartered.

    He talked about the potential challenges of artificial intelligence to education, according to the school’s social media page.

    “ChatGPT and similar technologies are just the beginning of the AI era. We should use artificial intelligence to solve problems instead of being controlled by it,” he said.

    Once the richest man in China, Mr Ma gave up control of financial technology giant Ant Group in January this year.

    It was seen by some commentators as further evidence that he had fallen foul of the Chinese Communist Party for becoming outspoken and too powerful.

    In October 2020, Mr Ma told a financial conference that traditional banks had a “pawn-shop mentality”.

    The following month, Ant’s planned £26bn stock market flotation, which would have been the world’s largest, was cancelled at the last minute by Chinese authorities, who cited “major issues” over regulating the firm.

    Since then, there have been reported sightings of him in various countries including Spain, the Netherlands, Thailand and Australia.

    Last November, the Financial Times newspaper reported that Mr Ma had been living in Tokyo, Japan for six months.

    When Mr Ma first stopped making public appearances, it was rumoured that he had been placed under house arrest or had been otherwise detained.

  • Google reveals AI features in Gmail, Docs, and more to compete with Microsoft


    A number of future generative AI features for Google’s Workspace apps, which include Google Docs, Gmail, Sheets, and Slides, have been unveiled.

    The features include new ways to generate, summarize, and brainstorm text with AI in Google Docs (similar to how many people use OpenAI’s ChatGPT), the option to generate full emails in Gmail based on users’ brief bullet points, and the ability to produce AI imagery, audio, and video to illustrate presentations in Slides (similar to features in both Microsoft Designer, powered by OpenAI’s DALL-E, and Canva, powered by Stable Diffusion).

    The announcement shows Google’s eagerness to catch up to competitors in the new AI race. Ever since the arrival of ChatGPT last year and Microsoft’s launch of its chatbot-enabled Bing this February, the search giant has been scrambling to launch similar AI features. The company reportedly declared a “code red” in December, with senior management telling staff to add AI tools to all its user products, which are used by billions of people, in a matter of months.

    But Google is definitely racing ahead of itself. Although the company has announced a raft of new features, only the first of these — AI writing tools in Docs and Gmail — will be made available to a group of US-based “trusted testers” this month. (This is also how Google announced availability for ChatGPT rival Bard.) Google says these and other features will then be made available to the public later in the year but didn’t specify when.

    You can see below the full list of AI-powered features Google says will be coming to Workspace apps in the future:

    • Draft, reply, summarize, and prioritize your Gmail
    • Brainstorm, proofread, write, and rewrite in Docs
    • Bring your creative vision to life with auto-generated images, audio, and video in Slides
    • Go from raw data to insights and analysis via auto-completion, formula generation, and contextual categorization in Sheets
    • Generate new backgrounds and capture notes in Meet
    • Enable workflows for getting things done in Chat

    An example of AI in Google Docs turning a prompt into a full job description. Image: Google

    Of all the new features, the AI writing and brainstorming tools in Docs and Gmail seem the most potentially useful. In a sample demo (GIF above), a user is shown the prompt “Help me write” and then enters a request: “Job post for a regional sales rep.” The AI system then completes the job spec for them in seconds, letting them edit and refine the text.

    Google expands on these potential functions in its press release: “Whether you’re a busy HR professional who needs to create customized job descriptions, or a parent drafting the invitation for your child’s pirate-themed birthday party, Workspace saves you the time and effort of writing that first version. Simply type a topic you’d like to write about, and a draft will instantly be generated for you. With your collaborative AI partner you can continue to refine and edit, getting more suggestions as needed.”

    A similar feature will let users rewrite text or expand it using AI tools. So, says Google, you might jot down a few bullet points about a work meeting. Google Docs can then expand this into a “more polished summary,” with users able to manually specify the tone (whether it should be “more whimsical” or “formal,” for example). In a video demo, Google shows AI being used to write personalized marketing messages for clients, turning bullet points into a full email, and summarizing the contents of a long email chain in Gmail. (Again, these are somewhat familiar features. Slack recently announced it will use ChatGPT to create similar summaries of discussions, for example.)

    One sample use case shows AI in Gmail turning bullet points into a full and formal email.


    It’s notable that Microsoft is rumored to be building similar features into its Office suite of apps, including Word, Teams, and Outlook. Microsoft famously unsettled Google this year with the launch of the new Bing. CEO Satya Nadella described AI-assisted search as a new paradigm that could unseat Google from its throne. But it seems the two companies will also be competing in the world of productivity software. Microsoft has scheduled an event where it will detail its plans for “the future of work with AI” later this week on March 16th.

    Of course, the rush to launch AI products has its dangers, too. AI text generating programs are notoriously unreliable, often “hallucinating” false information and presenting it with utter confidence. They’re also prone to regurgitating racial and gendered biases present in their training data.

    As Google integrates this technology into its enterprise software, these failings could cause major issues. What if Google’s AI summaries of your meetings misattribute quotations or ideas, for example? Or if your AI-generated marketing emails invent new clients or products? In its press release today, Google offered a standard disclaimer: “Sometimes the AI gets things wrong, sometimes it delights you with something offbeat, and oftentimes, it requires guidance.” But while users may see the funny side of Microsoft’s Bing chatbot going off the rails, they may take less kindly to an “offbeat” AI that costs them money.

  • Unusual: a Colombian judge consults ChatGPT to rule


    A judge caused a stir in Colombia by announcing that he had used the artificial intelligence chatbot ChatGPT to rule on a case concerning an autistic child, corroborating sources said Thursday.

    “This opens up immense prospects; today it could be ChatGPT, but in three months it could be any other alternative to facilitate the drafting of legal texts on which a judge can rely,” Judge Juan Manuel Padilla said on local radio. “However, the goal is not to replace the judges,” he stressed.

    In a Jan. 30 decision, he ruled on a mother’s request for her autistic son to be exempted from paying for medical appointments, treatment and transportation to hospitals, as the family lacked the funds needed to pay for them.

    Mr Padilla ruled in favour of the child and indicated in his judgment that he had put questions to the chatbot ChatGPT in reaching his decision.

    “Is the autistic minor exempted from paying moderation fees for his therapies?” the judge asked, according to the transcript of his decision. The app replied: “Yes, that’s correct. Under Colombian law, minors diagnosed with autism are exempt from paying moderation fees for their therapies.”

    “Judges are not fools; it is not because we ask questions of the application that we cease to be judges, thinking beings,” commented Mr Padilla.

    According to him, ChatGPT today does what was previously provided by “a secretary”, “in an organized, simple and structured way”, which “could improve response times in the judicial sector”.

    These statements sparked a lively debate.

    Professor Juan David Gutiérrez, of Rosario University, explained in particular that he received different answers after asking the same questions. “As with other AIs in other fields, under the pretext of supposed efficiency, fundamental rights are put at risk,” he warned.

    ChatGPT artificial intelligence has been causing a sensation in the world since November. Created by the Californian company OpenAI, the chatbot ChatGPT works on the basis of algorithms and huge databases.

    It produces text from simple prompts, which can be used by lawyers, engineers or journalists in particular, though with the risk of manipulation or misinformation.

    “I suspect a lot of my colleagues will jump in and start ethically crafting their judgments with the help of artificial intelligence,” Padilla said.

    Source: Africa News

  • Kenyans receive $2 per hour to make ChatGPT less harmful – Report


    A recent investigation by Time Magazine has revealed that OpenAI, an artificial intelligence research company, paid Kenyan workers less than $2 (£1.60) an hour to make its ChatGPT chatbot less toxic.

    The workers were tasked to help build a filter system that would make ChatGPT suitable for everyday use, Time reported.

    They were required to read graphic descriptions of child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.

    Their working conditions and pay are considered exploitative even as their work contributes to billion-dollar industries.

    OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based company that counts Google, Microsoft, Salesforce and Yahoo among its clients.

    The Kenyan workers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance, Time reported.

    A spokesperson for Sama is quoted by Time as saying that employees were entitled to both individual and group sessions with professionally trained and licensed mental health therapists.

    Sama cancelled all its work for OpenAI in February 2022.

    Source: BBC