Tag: Artificial Intelligence

  • Don’t indoctrinate students with your political ideologies – Afenyo-Markin to teachers

    Member of Parliament for Effutu, Alexander Afenyo-Markin, has called on local teachers to avoid influencing students with their political beliefs as the December 2024 elections approach.

    He emphasized that such actions can create division and tension within schools.

    During a laptop distribution event for teachers in Effutu, part of his one-teacher-one-laptop initiative, Afenyo-Markin stated, “In this election year, everyone is entitled to their political opinions. However, as educators, it’s crucial to maintain a harmonious environment. Engage in political discussions if you must, but keep them civil and avoid causing unnecessary strife. Don’t let the political climate extend to your students.”

    He further urged, “Express your views and offer critiques in a constructive manner. The politics of insults are detrimental to our country. We should be able to debate respectfully, like I do with my colleague Dr. Ato Forson, and still enjoy a cup of tea, coffee, or ‘waakye’ together afterward. That’s the essence of democracy. This is common practice in more developed democracies.”

    Afenyo-Markin also encouraged teachers to maximize the use of the laptops provided, noting, “We want to support you in educating our children. In this age of Artificial Intelligence, it’s vital to have the right tools to stay ahead. Without them, students might outpace you in knowledge. Utilize these resources, research, and pass on your findings. Effutu is well-equipped with libraries, offering ample materials to aid in educating the youth,” he concluded.

  • 40% of global labor force will be affected by Artificial Intelligence – IMF

    The International Monetary Fund (IMF) has forecasted that the introduction of Artificial Intelligence (AI) will impact 40 percent of the global workforce.

    While AI has demonstrated improvements in productivity, it is poised to displace numerous jobs in the foreseeable future.

    Speaking on the sidelines of the 2024 IMF-World Bank Spring Meetings in Washington DC, the Managing Director of the Fund, Kristalina Georgieva, emphasized that AI is here to stay.

    She cited an IMF study revealing that AI’s influence could extend to 40 percent of jobs worldwide, with advanced economies potentially facing a 60 percent impact.

    Georgieva highlighted the dual nature of AI, stating it could enhance worker productivity while simultaneously posing a threat to employment. She stressed the importance of investing in digital infrastructure, skills development, and robust social safety nets to shape the pace of AI adoption and its ramifications on productivity.

    Furthermore, the IMF’s January report outlined potential ramifications on income and wealth inequality within countries due to AI.

    It posited that AI might lead to polarization within income brackets, benefiting workers adept at leveraging AI while leaving others behind.

    The report suggested that younger workers might find it easier to capitalize on AI opportunities, while older workers might face challenges in adaptation.

    Additionally, the report noted that AI’s impact on labor income hinges on its complementarity with high-income workers.

    If AI significantly complements higher-income workers, it could disproportionately increase their labor income and favor high earners. This scenario, coupled with gains in productivity from AI adoption boosting capital returns, could exacerbate inequality.

    In light of these findings, the IMF stressed the need for proactive policymaking to mitigate the potential exacerbation of inequality by AI.

    It advocated for establishing comprehensive social safety nets and offering retraining programs for vulnerable workers to ensure an inclusive AI transition that safeguards livelihoods and curbs inequality.

    “A recent IMF study shows that artificial intelligence could affect up to 40 percent of jobs across the world and 60 percent in advanced economies.

    “It could enhance workers’ productivity but also threatens some jobs. Investing in digital infrastructure and skills, as well as in strong social safety nets will determine the pace of AI adoption and its impact on productivity.”

    The IMF in January this year also predicted that “AI could also affect income and wealth inequality within countries. We may see polarization within income brackets, with workers who can harness AI seeing an increase in their productivity and wages—and those who cannot falling behind. Research shows that AI can help less experienced workers enhance their productivity more quickly. Younger workers may find it easier to exploit opportunities, while older workers could struggle to adapt.

    “The effect on labour income will largely depend on the extent to which AI will complement high-income workers. If AI significantly complements higher-income workers, it may lead to a disproportionate increase in their labour income. Moreover, gains in productivity from firms that adopt AI will likely boost capital returns, which may also favour high earners. Both of these phenomena could exacerbate inequality.

    “In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions. It is crucial for countries to establish comprehensive social safety nets and offer retraining programs for vulnerable workers. In doing so, we can make the AI transition more inclusive, protecting livelihoods and curbing inequality.”

  • Dr Bawumia’s 70 point Agenda (Vision) for Ghana

    1. A growth mindset curriculum to help students build critical skills such as problem solving, risk-taking, opportunity spotting, and design thinking.

    2. Enhance the repositioning of the education system towards STEM, Robotics, Artificial Intelligence, and vocational skills to cope with the demands of the fourth Industrial Revolution and job creation.

    3. Expand infrastructure at medical schools as well as the Ghana Law School to support an increase in admission for students for medical and legal studies.

    4. Enhance fiscal discipline through an independent fiscal responsibility council enshrined in the Fiscal Responsibility Act, 2018 (Act 982).

    5. Reduce the number of Ministers to 50

    6. The Fiscal Responsibility Act will also be amended to add a fiscal rule requiring that budgeted expenditure in any year does not exceed 105% of the previous year’s tax revenue.

    7. Reduce the fiscal burden on government by leveraging the private sector.

    8. Introduce a very simple, citizen- and business-friendly flat tax regime: a flat tax of a % of income for individuals and SMEs (which constitute 98% of all businesses in Ghana), with appropriate exemption thresholds set to protect the poor.

    9. Tax amnesty

    10. Electronic and faceless audits by GRA

    11. No taxes on digital payments. The e-levy will therefore be abolished.

    12. No VAT on electricity (if still on books)

    13. No emissions tax and

    14. No betting tax

    15. Tema port will be fully automated.

    16. A new policy of aligning the duties and charges at Tema port to the duties and charges at Lome Port

    17. Spare parts importers’ duties will be charged at a flat rate per container (20- or 40-foot).

    18. In collaboration with the private sector, we will train at least 1,000,000 youth in IT skills, including software development, to provide job opportunities worldwide.

    19. Empower the private sector to create modern, sustainable and well-paying jobs for the youth.

    20. Reduce the cost of Data by working with industry players in setting clear policy guidelines that will remove any investor uncertainty and difficulties in business planning.

    21. Expeditious allocation of spectrum.

    22. Make it easy for Ghanaians to obtain passports: under my government, any Ghanacard holder will only have to pay a fee for a passport.

    23. An e-visa policy for all international visitors to Ghana, to enable visas to be obtained in minutes subject to security and criminal checks.

    24. Attain food security through the application of technology and irrigation to commercial large-scale farming.

    25. Promote the use of agricultural lime to reduce the acidity of our soils, enhance soil fertility and get more yield from the application of fertilizers.

    26. Prioritize the construction of the Pwalugu Dam by using private sector financing to crowd in grant financing.

    27. Adoption of electric vehicles for public transportation.

    28. Partner with the private sector to build large housing estates without the government having to borrow or spend.

    29. The National Rental Assistance Scheme (which is working so well) will be enhanced to deal with the problem of demands for rent advances of two years and more.

    30. Diversify the generation mix by introducing 2000MW of solar power and additional wind power through independent power producers.

    31. More private sector participation in generation and retail.

    32. No import duty on solar panels.

    33. License all miners doing responsible mining.

    34. As long as miners mine within the limits of their licenses (e.g. no mining in rivers or water bodies), there will no longer be any seizure or burning of excavators.

    35. Fully decentralize the minerals commission as well as Environmental Protection Agency (EPA) and ensure that they are present in all mining districts.

    36. In collaboration with the large mining companies, convert abandoned shafts into community mining schemes.

    37. Open more new community mining schemes.

    38. District mining committees should be responsible for reclamation and replanting.

    39. A pension scheme for small-scale miners, as we have done for cocoa farmers.

    40. Introduce vocational and skills training on sustainable mining for small-scale miners into the curriculum of TVET institutions.

    41. Provide equipment to government authorities in mining communities to undertake reclamation of land.

    42. We will set up state of the art common user gold processing units in mining districts in collaboration with the private sector.

    43. Conduct an audit of all concessions with various licenses and new applications.

    44. Abolish the VAT on exploration services (like assaying) to encourage more exploration.

    45. Establish, in collaboration with the private sector, a Minerals Development Bank to support the mining industry.

    46. Establish (through the private sector) a London Bullion Market Association (LBMA) certified gold refinery in Ghana within four years.

    47. All responsibly mined small scale gold produced will be sold to the central bank, PMMC or MIIF and will be required to be refined before export

    48. Engage exploration experts from the universities and geological institutions to assist in exploring our seven gold belts.

    49. Provide the Geological Survey Department and our universities with resources annually to undertake a mapping of areas where we have gold reserves.

    50. Build Ghana’s gold reserves appreciably to reach a point when we have sufficient gold reserves to keep our external payments position sustainably strong.

    51. Protect local industry from smuggled imports that evade import duties.

    52. Special Economic Zones (Free Zones) will also be created in collaboration with the private sector at Ghana’s major border towns such as Aflao, Paga, Elubo, Sankasi and Tatale to enhance economic activity, increase exports, reduce smuggling and create jobs.

    53. Individualized credit scoring

    54. Digitalization of land titling and transfer

    55. Propose to amend Article 87 of the 1992 Constitution as well as the NDPC Act (Act 479) to mandate that political party manifestoes, and consequently the economic and social policies of governments, as well as budgets, be aligned to the agreed-upon broad contours in specific sectors.

    56. Amend the 1992 Constitution, with key emphasis on issues such as reducing the power of the President and empowering other institutions, ex gratia payments, the rights of dual citizens, and the election of MMDCEs to deepen decentralization, among others, with extensive public consultation.

    57. Prioritise the creation of incentives for corporate sponsorship as a sustainable model of financing sports development and promotion for our national teams.

    58. Establish the Ghana School Sports Secretariat, which will be an agency under the ministry responsible for sports, in collaboration with other stakeholders such as the GES and sports federations.

    59. Leverage technology, data and systems to improve healthcare.

    60. Expand infrastructure at medical schools and improve human capital development.

    61. Introduce digital and streaming platforms for our artists to make tourism and the creative arts a growth pole in Ghana.

    62. Tax incentives will also be provided for film producers and musicians.

    63. Implement a visa-on-arrival policy for all international visitors to Ghana as has recently been implemented by Kenya.

    64. Recruit 1,000 special education teachers and retrain teachers on how to work with special needs students.

    65. Train more speech and language therapists and occupational and behavioural therapists.

    66. Fiscal and administrative decentralization

    67. Empower the private sector to build roads, hospitals, and schools.

    68. Prioritize the full implementation of the Affirmative Action Act, which is expected to have been passed by January 2025.

    69. After completing their education, those who secure jobs will be exempted from national service; national service will no longer be mandatory.

    70. Seek school-level collaboration with international sports bodies like the NBA and NFL to make Ghana a hub for these emerging sports in Africa, to create more opportunities for young people.

  • National systems, structures must be redesigned to maximize AI use – IMANI Ghana

    Vice President of IMANI-Ghana, Selorm Branttie, has urged the government to undertake a comprehensive redesign of its systems and structures to optimize the utilization of Artificial Intelligence (AI).

    Drawing attention to an IMF index report assessing countries’ preparedness for AI, Mr Branttie highlighted Ghana’s placement in the lower third quadrant.

    This positioning indicates a significant lack of readiness for the evolving landscape of artificial intelligence.

    He criticized the government’s persistent inclusion of human elements in automated processes, attributing it to prioritizing procurement benefits over efficiency.

    Speaking on the Joy Super Morning Show, Mr Branttie pointed out that in certain service industries, unnecessary emphasis on human involvement has been maintained, contributing to inefficiencies for the benefit of specific parties.

    He emphasized the importance of aligning these decisions with efficiency goals rather than procurement advantages.

    “For example, in the service industry in areas where you’ll just need to run simple services we have overemphasised the human elements just because we like those inefficiencies to exist in order that we gain or some certain parties gain, and if these things are not done in the name of efficiency rather than in the name of procurement benefits,” he said.

    “Now when it comes to AI we are at a point where there’s a turning point, we’re at a very critical tipping point where if we decide to really begin to use some of those systems and have a deliberate approach to it, it could dramatically change our environment.

    Mr Branttie stressed the pivotal role of the government in leading the charge towards the integration of AI into existing structures to drive enhanced efficiency.

    He highlighted a critical turning point, emphasizing that a deliberate approach to the adoption of AI systems could dramatically transform the national environment.

    “The IMF report that Winston and you guys talked about earlier this morning actually shows of [inaudible] Ghana in the lower income category where our preparedness for AI is much on the lower side. In an index of 125 countries, we are in the lower quadrant or the lower third or so, so we’re not at the point where our systems are being designed to maximize the use of AI as it should,” he said.

    “The issues here, or the issues for us in Ghana and Africa is that one, we’re not building enough data sets to feed into AI models to generate the things that are relevant to our environment.

    Referring to the IMF report discussed earlier, Branttie highlighted Ghana’s placement in the lower income category in terms of preparedness for AI. Among 125 countries assessed, Ghana falls into the lower third quadrant, indicating a significant gap in incorporating AI as a strategic asset.

    “So even now if you look at a lot of these AI systems, they’re more attuned to what will be culturally or informationally represent the West’s outlook on things or an American or European outlook on things.

    “And you’d find very little nuance on African views or how we think or how we process our thoughts, our language, our culture, etc. and beyond that we have to look at it in many ways,” he said.

    One of the challenges identified by Branttie is the lack of locally developed data sets to feed into AI models. Currently, most AI technology is designed for a Western audience, resulting in a limited representation of African perspectives, cultures, and languages.

    To address this gap, he emphasized the urgent need for Ghana and Africa to build robust local data sets, ensuring that AI models reflect and understand the nuanced views and thought processes unique to the continent.

  • AI expected to affect 40% of global employment – IMF projects

    The International Monetary Fund (IMF) has forecasted a significant transformation in the global job market caused by Artificial Intelligence (AI). 

    The Fund foresees that nearly 40% of jobs worldwide will be exposed to the rapid advancement of AI.

    As technology continues to evolve at an unprecedented pace, the IMF’s latest report sheds light on the potential impact of AI on various industries and occupations, raising concerns about the future of work for millions around the world.

    The IMF underscores that the integration of AI technologies into diverse sectors, ranging from manufacturing to service industries, is poised to bring about substantial changes in labor dynamics. 

    Almost 40% of global employment is exposed to Artificial Intelligence, the International Monetary Fund has stated in its new analysis.

    Historically, it said, automation and information technology have tended to affect routine tasks, but one of the things that sets AI apart is its ability to impact high-skilled jobs.

    As a result, advanced economies face greater risks from AI—but also more opportunities to leverage its benefits—compared with emerging market and developing economies.

    In advanced economies, about 60% of jobs may be impacted by AI. Roughly half the exposed jobs may benefit from AI integration, enhancing productivity.

    For the other half, AI applications may execute key tasks currently performed by humans, which could lower labor demand, leading to lower wages and reduced hiring. In the most extreme cases, some of these jobs may disappear.

    In emerging markets and low-income countries, by contrast, AI exposure is expected to be 40 percent and 26 percent, respectively. These findings suggest emerging markets and developing economies face fewer immediate disruptions from AI.

    At the same time, the report said, many of these countries don’t have the infrastructure or skilled workforces to harness the benefits of AI, raising the risk that over time the technology could worsen inequality among nations.

  • Biden issues warning about risks of artificial intelligence

    Biden says that artificial intelligence must be “safe” before it is made available to people. He says that governments should control this technology and not let the technology control the governments.

    He adds that we should treat these tools as opportunities, rather than using them to control others.

    Last week, important people in the technology industry, like Elon Musk and Mark Zuckerberg, had a meeting with Congress to discuss rules and laws related to technology.

  • Ways to spot an AI Cheater

    “Labyrinthian mazes”. I don’t know what exactly struck me about these two words, but they caused me to pause for a moment.

    As I read on, however, my alarm bells started to ring. I was judging a science-writing competition for 14-16 year-olds, but in this particular essay, there was a sophistication in the language that seemed unlikely from a teenager.

    I ran the essay through Artificial Intelligence (AI) detection software. Within seconds, Copyleaks displayed the result on my screen and it was deeply disappointing: 95.9% of the text was likely AI-generated. I needed to be sure, so I ran it through another tool: Sapling, which identified 96.1% non-human text.

    A third tool confirmed the first two, but was slightly lower in its scoring: 89% AI. So then I ran it through yet another, called Winston AI. It left no doubt: 1% human. Four separate AI detection tools all had one clear message: this is an AI cheater.

    I had known for some time that AI-written content was causing serious challenges to many industries, including my own profession of journalism. Yet here I was, caught by surprise because a student thought it would be acceptable to submit an AI-drafted entry for a writing competition.

    Of course, students trying to cheat isn’t anything new. What struck me was the possibility that the intentional use of AI could be more widespread than I had realised. Staring at the fake student essay before me, I couldn’t help but worry. As the mother of an eight-year-old child with most of her educational journey still before her, seeing AI used by a school-child caused me great concern about the integrity and value of the learning process in the future.

    So, how might we spot the AI cheaters? Could there be cues and tells? Fortunately, new tools are emerging. However, as I would soon discover, the problem of AI fakery spans beyond the world of education – and technology alone won’t be enough to respond to this change.

    AI-written text contains patterns that can be spotted by other tools – but they are not foolproof (Credit: Getty Images)

    In the case of student cheating, the reassuring news is that teachers and educators already have existing tools and strategies that could help them check essays. For example, Turnitin, a plagiarism prevention software company that is used by educational institutions, released AI writing detection in April. Its CEO Chris Caren told me that the software’s false positive rate (when it wrongly identifies human-written text as AI) stands at 1%.

    There are also web tools like the ones I used to check my student essay, like Copyleaks, Sapling, and Winston AI, or others like GPTZero and the “AI classifier” released by OpenAI, the creator of ChatGPT. Most are free to use: you simply paste in text on their websites for a result.

    How can AI detect another AI? The short answer is pattern recognition. The longer answer is that checkers use unique identifiers that differentiate human writing from computer-generated text. “Perplexity” and “Burstiness” are perhaps the two key metrics in AI text-sleuthing.

    Perplexity measures how well a language model performs in writing good, grammatically correct, plausible sentences – in short, how well it predicts the next word. Humans tend to write with different perplexity than AIs, producing more unpredictable and diverse sentences.
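    For illustration only, the perplexity idea can be sketched in a few lines of Python. This is a toy unigram model with invented example data, not the large neural language models that real detectors run:

    ```python
    import math
    from collections import Counter

    def unigram_perplexity(text: str, corpus: str) -> float:
        """Toy perplexity under a unigram model estimated from `corpus`.

        Perplexity is the exponentiated average negative log-probability
        of each word: lower means the text is more predictable under the
        model. Real detectors use large neural language models instead.
        """
        words = corpus.lower().split()
        counts = Counter(words)
        total = len(words)
        vocab = len(counts) + 1  # reserve one slot for unseen words
        log_prob = 0.0
        test_words = text.lower().split()
        for w in test_words:
            # Laplace (add-one) smoothing so unseen words get a small probability
            p = (counts[w] + 1) / (total + vocab)
            log_prob += math.log(p)
        return math.exp(-log_prob / len(test_words))

    corpus = "the cat sat on the mat the cat sat"
    print(unigram_perplexity("the cat", corpus))       # familiar words: lower perplexity
    print(unigram_perplexity("quantum flux", corpus))  # unseen words: higher perplexity
    ```

    Text made of words the model has often seen scores a lower perplexity than text full of words it has never seen, which is the intuition detectors exploit.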

    Burstiness refers to the variance of the sentences. In written text, AI tends to be more uniform across the board: its sentence structure and lengths are generally regular, and it is less creative in its word choice and usage of phrases. The frequency and combination of terms, repeated phrases and sentence structures create clusters that lack the variation of an extended vocabulary and the flourishing style that a human-written text would normally display.
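    The burstiness idea can likewise be sketched as a toy score, here simply the spread of sentence lengths. This is an invented illustration, not what Copyleaks or any other commercial detector actually computes:

    ```python
    import re
    import statistics

    def burstiness(text: str) -> float:
        """Toy burstiness score: the spread of sentence lengths.

        Splits on sentence-ending punctuation and returns the standard
        deviation of sentence lengths in words. Uniform, AI-like prose
        tends to score lower; varied human prose tends to score higher.
        """
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return 0.0
        return statistics.stdev(lengths)

    uniform = "The cat sat down. The dog ran off. The bird flew away."
    varied = ("Stop. The committee, after months of deliberation, finally "
              "published its long-awaited findings. Everyone read them.")
    print(burstiness(uniform) < burstiness(varied))  # True: varied prose scores higher
    ```

    A passage of identically sized sentences scores zero, while prose that mixes one-word sentences with long ones scores high, capturing the "regularity" tell described above.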

    In one study, English drafted by Chinese writers was more likely to be identified as AI (Credit: Getty Images)

    However, AI is getting ever-better at sounding human. And already it’s clear that these spotting tools are not foolproof. In a recent paper by researchers at Stanford University, GPT detectors showed bias against non-native English writers. They evaluated the performance of seven widely used GPT detectors on 91 TOEFL (Test of English as a Foreign Language) essays from a Chinese forum and 88 US eighth-grade essays from the Hewlett Foundation’s ASAP (Automated Student Assessment Prize) dataset. The detectors accurately classified the US student essays but falsely labelled more than half of the TOEFL essays as “AI-generated” (average false-positive rate: 61.3%).

    To GPTZero’s CEO Edward Tian, detection is only half the solution. He believes that the cure to irresponsible AI usage is not detection, but in new writing verification tools. This would help to restore transparency in the writing process, he says. His vision is of enabled students who transparently and responsibly disclose AI involvement as they write. “We started building the first human verification tool for students to prove they are the writer,” Tian says.

    Human in the loop

    Here is the real challenge for humans as AI-produced writing spreads: we probably cannot rely on tech to spot it. A sceptical, inquisitive attitude toward information, which routinely stress-tests its veracity, is therefore important. After all, I only thought to check my student essay with an AI-checker because I was suspicious in the first place.

    The war on disinformation has already shown us that automated tools alone do not suffice, and we need humans in the loop. One person who has seen this first-hand is Catherine Holmes, legal director of the Foreign, Commonwealth and Development Office at Whitehall, who has been working within the UK’s national security departments for decades. When seeking to corroborate information that could be false, she says, people’s judgement remains vital. “You are trying to figure out whether this bit of information is actually accurate based on a human being’s actual insight.”

    It’s the same in the world of fraud. At global accounting firm PricewaterhouseCoopers, where forensic services director Rachael Joyce helps clients with investigations into fraud and misconduct, human oversight and insight is a key part of the process: “The human element brings a layer of critique and expertise of context to investigations that AI is not very good at.”

    So, what AI-checking can you do yourself? Over the past few years, I’ve been researching and writing a book called The Truth Detective, which is about how to enhance your critical thinking. Here are some basic questions I’ve learnt that could help you get started with your AI detective work.

    Your first task is to verify. Can you verify and check the sources? Can you check the evidence – both written and visual? How do you do this? Cross-check. If you cannot cross-check or find supporting material from other reputable sources, your suspicions should be raised. “There is this hallucination problem with generated AI where it will make things up,” says Caren from Turnitin. “Factchecking is super important as a consumer of the content or as someone who’s using AI to help them be more productive.”

    The next step is to take a closer look at the text. Some clues can be found in spelling, usage of grammar and punctuation. For now, the default language for AI is still American English. If the spelling and grammar is not appropriate for the publication or the author writing it, ask: why? Does it include quotes? If so, who are the quotes by – do these people or institutions exist? Do this also for any references used, and check what date they are from: a tell is that AI is often still limited in terms of what data sources it can access, and it can be unaware of recent news. Are there any references to specific knowledge? A lack of it may indicate fraudulence.

    Finally, check the tone, voice and style of writing. There are linguistic patterns that are still stilted in AI-generated text (at least for now). A particular giveaway is an abrupt change in tone and voice.

    The following example is perhaps a stark reminder that AI can easily make things up that can seem plausible and very real, but absolutely need cross-checking. 

    In June 2023, in what the courts described as an “unprecedented” situation, Steven A Schwartz, a lawyer in New York, filed a motion that landed him in the hot seat with the judge. Why? The citations and judicial opinions he submitted simply did not exist. He had used ChatGPT, which had assured him the cases were real and that they could be found on legal research sites such as Westlaw and LexisNexis. As an example, in response to Schwartz’s request to “show [him]” evidence for a case, ChatGPT responded: “Certainly! Here’s a brief excerpt…” It then continued to provide extended hallucinated excerpts and favourable quotations. Schwartz said he was mortified. He had believed ChatGPT to be a search engine similar to Google.

    Not all cases will be this blatantly obvious, however. So, as we all glide into an artificially drafted future, it’s clear that a human questioning mindset will be needed. Indeed, our investigative skills and critical thinking techniques could be in more demand than ever before.

    Correction: In an earlier version, Edward Tian’s company was wrongly named. He is the CEO of GPTZero.

    *Alex O’Brien is a freelance journalist and the author of the upcoming book The Truth Detective: A Poker Player’s Guide to a Complex World (Souvenir Press, November 2023).

    DISCLAIMER: Independentghana.com will not be liable for any inaccuracies contained in this article. The views expressed in the article are solely those of the author, and do not reflect those of The Independent Ghana.

  • Dr Kpodar proposes leveraging Artificial Intelligence as an anti-corruption measure

    Dr Chris Kpodar, Global Artificial Intelligence Specialist, is advocating for the utilisation of artificial intelligence as a powerful anti-corruption tool by reengineering systems to address vulnerabilities that were previously susceptible to bribery and corruption.

    “As a nation, we must adopt Artificial Intelligence as a mechanism to build transparency, integrity, and trustworthiness, which are necessary to fight corruption,” he said.

    Dr Kpodar, who served as a Consultant for Africa and the Middle East, was speaking at a forum organised by the Ghana News Agency in Tema, where he advised governments and companies to invest in these technologies.

    He explained that without effective public scrutiny, the risk of money being lost to corruption and misappropriation was vast.

    He said Artificial Intelligence modernises all traditional, tried-and-tested methods of investigation, with the added advantage that the machines applying it cannot be improperly influenced.

    Dr Kpodar, who is also the Executive Director at Solomon Investment Ghana Limited, stressed that even though corruption remained the biggest challenge to society, Artificial Intelligence was the key to solving the problem.

    Dr Kpodar emphasised that Artificial Intelligence applications would radically reduce or eliminate manual operations, which form fertile ground for corrupt practices.

    “Artificial Intelligence has the capacity to reveal or even predict corruption or fraud that was previously nearly or completely impossible to detect. If the government wants to be credible in every sector, they must invest in Artificial Intelligence for an everlasting solution to corruption,” Dr Kpodar stated.

    Dr Kpodar, however, commended the government for initiating its digitisation programme, saying digitisation was a prerequisite for deploying Artificial Intelligence as an anti-corruption weapon.

  • Hollywood actors hit street to protest pay and AI

    Over 160,000 performers in Los Angeles halt film and TV productions as they join screenwriters in the industry’s largest shutdown in 60 years.

    The Screen Actors Guild (SAG) demands fair profit sharing and improved working conditions from streaming platforms.

    They also seek protection against the use of digital replicas, ensuring actors are not replaced by artificial intelligence (AI).

    During the strike, actors cannot participate in films or promote completed projects. At the premiere of Christopher Nolan’s Oppenheimer, stars like Cillian Murphy, Matt Damon, and Emily Blunt left the event in support of the strike. Nolan expressed solidarity, stating that the actors were “off to write their picket signs.”

    Actors, including Bob Odenkirk, Cynthia Nixon, and Jamie Lee Curtis, voiced their support on Instagram. Picketing will commence at Netflix’s California headquarters before targeting Paramount, Warner Bros, and Disney.

    The major studios have proposed safeguards for actors’ digital likenesses, requiring their consent for the use of digital replicas or alterations. However, SAG rejected the offer as unacceptable.

    The strike’s impact affects ongoing film productions and limits the availability of actors for reshoots and other essential tasks. TV shows in production will also face significant disruption, although some arrangements may be made to allow limited work to continue.

    Promotional events featuring top Hollywood stars, such as the Emmys and Comic-Con, might be rescheduled or scaled back due to the strike.

    The AMPTP said the strike was “certainly not the outcome we hoped for as studios cannot operate without the performers that bring our TV shows and films to life”.

    “The union has regrettably chosen a path that will lead to financial hardship for countless thousands of people who depend on the industry,” its statement added.

    The union leading the strike is officially recognized as the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA).

    One of the union’s key demands of the streaming services is an increase in both base pay and residuals for actors. Residuals are the payments actors receive from reruns or repeated showings of the films and programs in which they have appeared.

    The strike encompasses tens of thousands of actors who are seeking fairer compensation for minor roles, as they currently receive significantly lower pay compared to their A-list counterparts.

    “In the old model, they get residuals based on success,” Kim Masters, the editor-in-chief of the Hollywood Reporter, told the BBC. “In the new model, they don’t get to find out what’s going on behind the scenes, because the streamers don’t share.”

    Fran Drescher, SAG’s president, said the strike came at a “very seminal moment” for actors in the industry.

    “What’s happening to us is happening across all fields of labour,” she said, “when employers make Wall Street and greed their priority, and they forget about the essential contributors that make the machine run.”

    Since May 2nd, a separate strike led by the Writers Guild of America, consisting of approximately 11,500 members, has been ongoing.

    The writers are demanding improved pay and working conditions. As a result of the strike, some writers have pursued non-contract projects that fall outside the agreement between the guild and the Alliance of Motion Picture and Television Producers.

  • RAIL Director urges policymakers to prioritize understanding AI before implementing regulations

    The Scientific Director of the Responsible Artificial Intelligence Lab (RAIL), Prof Jerry John Kponyo, is calling for an adequate understanding of the development cycle of Artificial Intelligence to inform regulation.

    “It’s important that before we talk about regulation, we have a very good understanding of what we seek to regulate. Right from the envisioning stage to the deployment stage, there are various aspects of the development cycle of AI solutions that one needs to pay particular attention to,” he said.

    Prof Kponyo’s comments come on the back of a motion in Ghana’s Parliament to regulate Artificial Intelligence.

    He was speaking at the maiden Africa AI conference in Kigali, Rwanda.

    Prof Kponyo worried that any regulation by external bodies might hamper creativity.

    He believes self-regulation by actors will be beneficial.

    “I will prefer we self-regulate rather than to be regulated by an external body. The fear with external regulation is the stifling of creativity and becoming an obstacle as far as the positives of AI are concerned.

    “Right from the envisioning stage to the deployment stage, they’ll make sure they apply principles with regards to AI solutions to ensure AI deployment is responsible,” he said optimistically.

    Already, RAIL has developed a framework that serves as a basis for determining whether an AI solution is beneficial or not.

    He, therefore, urged actors in the AI space to familiarise themselves with the framework and other useful frameworks.

  • AI will contribute to job losses – Web & Software CEO

    Ghanaians have been cautioned to prepare for the potential adverse effects of Artificial Intelligence on the world of work, and to be ready for the changes and challenges that may arise as AI continues to advance and integrate into various industries.

    Chief Executive Officer of Web & Software, Philip Gamey, has noted that there would be job losses when companies fully adopt the usage of Artificial Intelligence (AI).

    According to him, companies with large workforces would trim them down, and the few workers retained would be trained to be technologically inclined and to adapt to the change.

    Speaking at a press conference in Accra, Mr Gamey entreated Ghanaian companies to take advantage of AI and use it to their benefit to enhance productivity.

    “Will AI result in job losses? Yes, it will. It is a significant yes. The concerns must not be of immediate priority for most companies but in no time, AI is going to result in a massive amount of job losses,” he stated while speaking on the theme ‘Artificial Intelligence – Assessment of strategy and impact on corporate Ghana.’

    He said that with the advent of AI 360, human-centered, tech-powered solutions can be created, meaning less labor would be needed.

    The Chief Executive Officer of Web & Software entreated corporate companies to use AI for decision-making, making recommendations, and analyzing workers’ data at the end of the year, among other tasks.

    He said only serious companies will leverage AI to outgrow their competitors in terms of revenue while maintaining the best workforce.

  • MPs fear the rise of AI; propose legislation

    Several Members of Parliament (MPs) have demanded the establishment of laws to control the use of artificial intelligence (AI) in the country.

    Artificial Intelligence tools refer to software applications that employ algorithms based on artificial intelligence to carry out specific tasks and address various challenges.

    During discussions held on the floor of Parliament on Wednesday, some legislators emphasized the substantial benefits offered by AI technologies but also stressed the need for regulation to ensure they are used in a manner that aligns with appropriate objectives.

    “If we do not act now the future will be bleak for the future of our country. Probably Mr. Speaker, we should consider establishing an artificial intelligence council,” MP for Tamale South, Haruna Iddrisu, said.

    MP for Ofoase Ayirebi, Kojo Oppong-Nkrumah who doubles as the Information Minister added that “those who worked on AI are beginning to worry about the potential.”

    “So it is opportune time for us to consider what kind of architecture, legal or regulatory to limit the most dangerous parts of AI,” he added.

    Geoffrey Hinton, who is regarded as the godfather of AI, expressed concerns over the growth of artificial intelligence in all sectors of the economy after quitting Google.

    “I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. 

    “It would mean the end of people,” he added.

    AI tools find applications across numerous industries such as healthcare, finance, marketing, and education. They serve purposes like task automation, data analysis, and enhanced decision-making capabilities.

  • Can China catch up with the US in the AI race?

    Artificial intelligence has grown to be a significant enough worry that it was added to the G7 summit’s weekend agenda, which was already jam-packed with important topics.

    Concerns about AI’s harmful impact coincide with the US’ attempts to restrict China’s access to crucial technology.

    For now, the US seems to be ahead in the AI race. And there is already the possibility that current restrictions on semiconductor exports to China could hamper Beijing’s technological progress.

    But China could catch up, according to analysts, as AI solutions take years to be perfected. Chinese internet companies “are arguably more advanced than US internet companies, depending on how you’re measuring advancement,” Kendra Schaefer, head of tech policy research at Trivium China, tells the BBC.

    However, she says China’s “ability to manufacture high-end equipment and components is an estimated 10 to 15 years behind global leaders.”

    The Silicon Valley factor

    The US’ biggest advantage is Silicon Valley, arguably the world’s supreme entrepreneurial hotspot. It is the birthplace of technology giants such as Google, Apple and Intel that have helped shape modern life.

    Innovators in the country have been helped by its unique research culture, says Pascale Fung, director of the Center for Artificial Intelligence Research at the Hong Kong University of Science and Technology.

    Researchers often spend years working to improve a technology without a product in mind, Ms Fung says.

    OpenAI, for example, operated as a non-profit company for years as it researched the Transformer machine learning model, which eventually powered ChatGPT.

    “This environment never existed in most Chinese companies. They would build deep learning systems or large language models only after they saw the popularity,” she adds. “This is a fundamental challenge to Chinese AI.”

    US investors have also been supportive of the country’s research push. In 2019, Microsoft said it would put $1bn (£810m) into OpenAI.

    “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges,” Microsoft chief executive Satya Nadella said.

    China’s edge

    China, meanwhile, benefits from a larger consumer base. It is the world’s second-most populous country, home to roughly 1.4 billion people.

    It also has a thriving internet sector, says Edith Yeung, a partner at the Race Capital investment firm.

    Nearly everyone in the country uses the super app WeChat, for example. It is used for almost everything from sending text messages, to booking doctor’s appointments and filing taxes.

    As a result, there’s a wealth of information that can be used to improve products. “The AI model is going to be only as good as the data that is available for it to learn from,” Ms Yeung says.

    “For good or bad, China has a lot less rules around privacy, and a lot more data [compared to the US]. There’s CCTV facial recognition everywhere, for example,” she adds. “Imagine how useful that would be for AI-generated images.”

    While China’s tech community may appear to be lagging behind the US, its developers have an edge, according to Lee Kai-Fu, who makes the argument in his book AI Superpowers: China, Silicon Valley, and the New World Order.

    “They live in a world where speed is essential, copying is an accepted practice, and competitors will stop at nothing to win a new market,” wrote Mr Lee, a prominent figure in Beijing’s internet sector and the former head of Google China.

    “This rough-and-tumble environment makes a strong contrast to Silicon Valley, where copying is stigmatised and many companies are allowed to coast on the basis of one original idea or lucky break.”

    China’s copycat era has its problems, including serious issues around intellectual property. But Mr Lee writes that it has also led to a generation of hardy and nimble entrepreneurs ready to compete.

    Since the 1980s, China has been expanding its economy, which used to be based mainly on manufacturing, to one that is technology-based, Ms Fung says.

    “In the last decade, we have seen more innovation from Chinese consumer-driven internet companies and high-end Chinese designs,” she adds.

    Can China catch up?

    While Chinese tech companies certainly have unique advantages, the full impact of Beijing’s authoritarianism is still unclear.

    There are questions, for instance, about whether censorship would affect development of Chinese AI chatbots. Will they be able to answer sensitive questions about President Xi Jinping?

    “I don’t think anyone in China will ask controversial questions on Baidu or Ernie in the first place. They know it’s censored,” Ms Yeung says. “Sensitive topics are a very small part of the usage [of chatbots]. They just get more media attention,” Ms Fung adds.

    The bigger concern is that US attempts to restrict China’s access to specialised tech can stymie the latter’s AI industry.

    High-performing computer chips, or semiconductors, are now the source of much tension between Washington and Beijing. They are used in everyday products including laptops and smartphones, and could have military applications. They are also crucial to the hardware required for AI learning.

    US companies like Nvidia currently have the lead in developing AI chips and “few [Chinese] companies can compete against ChatGPT” given export restrictions, Ms Fung says.

    While this will hit China’s high-tech industries like cutting-edge AI, it won’t affect the production of consumer technology, such as mobiles and laptops. This is because “the export controls are designed to prevent China from developing advanced AI for military purposes,” Ms Schaefer says.

    To overcome this, China needs its own Silicon Valley – a research culture that attracts talent from diverse backgrounds, Ms Fung says.

    “So far it has relied on both domestic talent and those from overseas with Chinese heritage. There is a limit to homogeneous cultural thinking,” she adds.

    Beijing has been trying to close the gap through its “Big Fund”, which offers massive incentives to chip companies.

    But it has also tightened its grip on the sector. In March, Zhao Weiguo became the latest technology tycoon to be accused of corruption by authorities.

    Beijing’s focus on certain industries can bring financial incentives and loosen red tape, but it may also mean greater scrutiny, and more fear and uncertainty.

    “Zhao’s arrest is a message for other state-owned firms: don’t mess around with state money, particularly in the chip space,” Ms Schaefer says. “Now it’s time to get on with the job.”

    How that message will affect the future of China’s AI industry remains to be seen.

  • AI chatbots may soon be more intelligent than us – Geoffrey Hinton

    A man widely regarded as the father of artificial intelligence (AI) has resigned from his position in order to raise awareness of the mounting risks posed by the field’s advancements.

    Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

    He told the BBC some of the dangers of AI chatbots were “quite scary”.

    “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

    Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: “I’m 75, so it’s time to retire.”

    Dr Hinton’s pioneering research on deep learning and neural networks has paved the way for current AI systems like ChatGPT.

    But the British-Canadian cognitive psychologist and computer scientist told the BBC the chatbot could soon overtake the level of information that a human brain holds.

    “Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.

    “And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

    In the New York Times article, Dr Hinton referred to “bad actors” who would try to use AI for “bad things”.

    When asked by the BBC to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario.

    “You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”

    The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.

    He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.

    “We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

    “And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

    He stressed that he did not want to criticise Google and that the tech giant had been “very responsible”.

    “I actually want to say some good things about Google. And they’re more credible if I don’t work for Google.”

    In a statement, Google’s chief scientist Jeff Dean said: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

  • The secret to responsible AI development

    Artificial intelligence development has accelerated significantly in recent months, with generative AI systems like ChatGPT and Midjourney quickly changing a variety of professional activities and creative processes.

    The window of opportunity for guiding the development of this powerful technology in ways that minimize the risks and maximize the benefits is closing fast.

    AI-based capabilities exist along a continuum, with generative AI systems such as GPT-4 (the latest version of ChatGPT) falling within the most advanced category. Given that such systems hold the greatest promise and can lead to the most treacherous pitfalls, they merit particularly close scrutiny by public and private stakeholders.

    Virtually all technological advances have had both positive and negative effects on society. On one hand, they have bolstered economic productivity and income growth, expanded access to information and communication technologies, extended human lifespans, and improved overall well-being. On the other hand, they have led to worker displacement, wage stagnation, greater inequality, and increasing concentration of resources among individuals and corporations.

    AI is no different. Generative AI systems open up abundant opportunities in areas such as product design, content creation, drug discovery and health care, personalized education, and energy optimization. At the same time, they may prove highly disruptive, and even harmful, to our economies and societies.

    The risks already posed by advanced AI, and those that are reasonably foreseeable, are considerable. Beyond widespread reorientation of labor markets, large-language-model systems can increase the spread of disinformation and perpetuate harmful biases. Generative AI also threatens to exacerbate economic inequality. Such systems may even pose existential risks to humankind.

    For some, this is a reason to tap the brakes on AI research. Last month, more than 1,000 AI technologists, from Elon Musk to Steve Wozniak, signed an open letter recommending that AI labs “immediately pause” the training of systems more powerful than GPT-4 for at least six months. During this pause, they argue, a set of shared safety protocols – “rigorously audited and overseen by independent outside experts” – should be devised and implemented.

    The open letter, and the heated debate it has triggered, underscores the urgent need for stakeholders to engage in a wide-ranging good-faith process aimed at aligning on robust shared guidelines for developing and deploying advanced AI.

    Such an effort must account for issues like automation and job displacement, the digital divide, and the concentration of control over technological assets and resources, such as data and computing power. And a top priority must be to work continuously to eliminate systemic biases in AI training, so that systems like ChatGPT do not end up reproducing or even exacerbating them.

    Proposals for AI and digital-services governance are already emerging, including in the United States and the European Union. Organizations like the World Economic Forum are also making contributions.

    In 2021, the Forum launched the Global Coalition for Digital Safety, which aims to unite stakeholders in tackling harmful content online and facilitate the exchange of best practices for regulating online safety. The Forum subsequently created the Digital Trust Initiative, to ensure that advanced technologies like AI are developed with the public’s best interests in mind.

    Now, the Forum is calling for urgent public-private cooperation to address the challenges that have accompanied the emergence of generative AI and to build consensus on the next steps for developing and deploying the technology.

    To facilitate progress, the Forum, in partnership with AI Commons – a nonprofit organization supported by AI practitioners, academia, and NGOs focused on the common good – will hold a global summit on generative AI in San Francisco on April 26-28.

    Stakeholders will discuss the technology’s impact on business, society, and the planet, and work together to devise ways to mitigate negative externalities and deliver safer, more sustainable, and more equitable outcomes.

    Generative AI will change the world, whether we like it or not. At this pivotal moment in the technology’s development, a cooperative approach is essential to enable us to do everything in our power to ensure that the process is aligned with our shared interests and values.

  • The advantages of ChatGPT as a business tool and possible drawbacks

    While ChatGPT has stirred up a bit of an artificial intelligence (AI) storm lately, AI is far from a novel idea. Businesses have actually been utilizing it, to varying degrees, for decades, whether to automate routine tasks, analyze data, or provide digital customer support.

    What ChatGPT has done, however, is raise awareness of the potential that AI has to add significant value to businesses of all shapes and sizes.

    The Chief Information Officer at First National Bank, Samuel Dakurah, explained that, as an AI language model trained to understand and respond to human language, ChatGPT has itself become a popular way for many businesses and employees to save time and improve efficiency.

    “However, while there are many benefits to using ChatGPT, there are also a number of limitations, even risks, that businesses should keep in mind in order to ensure that they are indeed making the most of this exceptional technology,” he said. Samuel shares some pros and cons for users of ChatGPT.

    The first, and arguably most valuable, of these benefits is the opportunity that ChatGPT presents for businesses to save time when gathering knowledge or doing research. Instead of spending hours scouring the internet or consulting with expensive experts, businesses now have the option to simply ask ChatGPT for information on a particular topic and receive a response in a matter of seconds.

    This can be particularly helpful for small businesses or start-ups that may not have the financial or human resources at their disposal to conduct extensive research on their own. It can be a quick, cheap, and useful way to conduct market research, identify trends, and gather insights into customer behaviours.

    However, it’s important to be aware of the fact that the information ChatGPT provides is highly contextual.

    While ChatGPT can quickly identify trends and patterns, it is unlikely to be able to provide the same level of analysis as a human researcher. Therefore, businesses must balance the benefits of speed and efficiency with the need for accuracy and depth of analysis.

    “For example, if a business asks ChatGPT ‘What is the best way to win customers between income categories?’, the response it receives is likely to be very generic, without adding significant value or real strategic insights the business does not already have.

    The reason is simply that ChatGPT does not know any of the finer nuances of the business, or the geographic and demographic factors that may influence its marketing efforts. So the response, while probably fairly accurate, will likely be far too broad to be useful,” he cautioned.

    The second benefit of ChatGPT is that it can help businesses improve efficiency by automating certain tasks. For example, ChatGPT may be useful in generating generic content for social media or marketing campaigns, thereby saving businesses time and resources.

    However, this content is once again going to be generic and may not capture the finer marketing requirements of the business. It is unlikely to be aligned with a business’s brand or messaging and it may not always be able to capture the subtleties of the brand voice.

    As a result, businesses need to be careful when using ChatGPT to generate content and ensure that it aligns with their overall branding and messaging strategy.

    The third business benefit of ChatGPT – and possibly the one that has the potential to be of highest value – is that the AI platform can serve as an objective sounding board for those looking to figure out whether they are asking the right questions about their business.

    By asking ChatGPT for information on a particular topic, and then critically assessing the responses they receive, businesses can gain valuable insights into whether they are actually asking the right questions or focusing on the right intelligence that will help them grow and prosper.

    “When asked the right questions, in the right way, ChatGPT may identify patterns and trends that would not be immediately apparent to human researchers. This can help businesses identify blind spots in their operations or generate new ideas for growth. This feature of ChatGPT can be particularly helpful for businesses that are just starting out, or those that are looking to pivot in a new direction based on market movements or competitor pressure,” Samuel explained.

    Of course, it’s important to remember that ChatGPT’s insights are only as good as the questions it is being asked. In other words, if a business is asking the wrong questions or focusing on the wrong information, ChatGPT may be unable to provide the insights they need. This means that any business wanting to derive worthwhile benefits from ChatGPT needs to be highly strategic in its use of the platform.

    Ultimately, it is imperative that business users of ChatGPT keep in mind that it is a tool, not a solution.

    While it can undoubtedly assist with certain tasks, it’s no substitute for human insight and creativity, and it should never be the sole source of business intelligence or market information.

    “As fantastic as ChatGPT may seem, an AI language model is not (yet) a substitute for a holistic business approach, built on relevant quantitative and qualitative data, and underpinned by deep experience and an integrated approach to understanding markets and customers,” he concludes.

  • The state of SMEs in light of new AI trends

    Small and medium-sized enterprises (SMEs) have long been the backbone of the global economy.

    They are the engine that drives innovation, job creation, and economic growth. However, SMEs are facing new challenges in the wake of the Fourth Industrial Revolution, which is marked by the emergence of artificial intelligence (AI) and other disruptive technologies.

    In this article, we will explore the future of SMEs in light of an emerging trend in AI.

    First, let us consider the current state of SMEs. According to the World Bank, SMEs represent over 90 percent of businesses worldwide and are responsible for more than half of all employment.

    However, SMEs face a number of challenges, including limited access to finance, inadequate infrastructure and regulatory barriers. These challenges can make it difficult for SMEs to compete with larger companies, which have more resources and economies of scale.

    This is where AI comes in. AI has the potential to level the playing field for SMEs by providing them with access to powerful tools that were once only available to large corporations. AI can help SMEs to automate processes, analyse data, and make better decisions. This can lead to increased efficiency, productivity, and profitability.

    One of the most promising applications of AI for SMEs is in the area of customer service. AI-powered chatbots and virtual assistants can provide 24/7 support to customers, without the need for human intervention. This can help SMEs to provide better customer service, while also reducing costs.
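    As a toy illustration of that 24/7 self-service idea (not a production chatbot, and the FAQ entries below are invented for the example), even a few lines of Python can field routine questions and hand everything else to a person:

```python
# Illustrative sketch only: a minimal keyword-based FAQ responder of the
# kind an SME might put in front of routine customer questions. Real AI
# chatbots use language models, but the interface idea is the same.
FAQ = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "delivery": "Standard delivery takes 3-5 working days.",
    "refund": "Refunds are processed within 14 days of a return.",
}

def answer(question: str) -> str:
    """Return the first FAQ entry whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    # Escalate anything unrecognised rather than guessing.
    return "Sorry, let me connect you to a human agent."

print(answer("What are your delivery times?"))
```

    In practice an SME would swap the keyword lookup for a language-model service, but the escalate-to-human fallback is the part worth keeping.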

    Another area where AI can benefit SMEs is in marketing and advertising. AI can help SMEs to target their advertising more effectively by analysing customer data and identifying patterns and trends. This can lead to more efficient use of marketing budgets and ultimately, better results.

    AI can also help SMEs to improve their operations. For example, AI-powered supply chain management systems can help SMEs to optimise their inventory levels, reduce waste, and improve delivery times. This can help SMEs to compete more effectively with larger companies, which have already invested in these types of systems.

    However, there are also challenges associated with the adoption of AI for SMEs. One of the biggest challenges is the cost of implementation. AI systems can be expensive to develop and implement, which can make it difficult for SMEs with limited budgets to adopt them.

    Another challenge is the lack of expertise in AI. SMEs may not have the resources to hire AI experts or train their existing staff in AI technologies. This can make it difficult for SMEs to effectively leverage AI in their operations.

    In addition, there are concerns about the impact of AI on jobs. Some experts predict that AI will lead to the automation of many jobs, particularly in industries such as manufacturing and logistics. This could have a negative impact on employment levels, particularly in regions where SMEs are a major source of jobs.

    Despite these challenges, the future looks bright for SMEs that are able to adopt AI. According to a recent report by Accenture, AI has the potential to add US$14 trillion to the global economy by 2035.

    SMEs that are able to leverage AI effectively can benefit from this growth, and potentially even outperform larger companies that are slower to adopt AI.

    So, what can SMEs do to prepare for the future of AI? The first step is to educate themselves about AI and its potential applications. SMEs should invest in training for their employees and consider partnering with AI experts or consultants to help them develop an AI strategy.

    SMEs should also consider starting small when it comes to AI adoption. Rather than trying to implement large-scale AI systems all at once, SMEs should focus on specific areas where AI can provide the most value, such as customer service or marketing.

    They can do so by following these steps:

    • Identify the areas where AI can provide the most value: SMEs should start by identifying the specific areas of their business where AI can provide the most value. This may involve analysing their operations and identifying areas where AI can improve efficiency, reduce costs, or improve customer satisfaction.
    • Set clear goals and objectives: Once SMEs have identified the areas where AI can provide the most value, they should set clear goals and objectives for implementing AI. This may involve defining specific metrics for success, such as reducing response times or increasing sales conversions.
    • Start small: SMEs should start small when it comes to AI adoption, focusing on specific use cases or applications where AI can provide the most value. This may involve implementing AI in a single department or for a specific process.
    • Leverage existing tools and technologies: SMEs can also leverage existing AI tools and technologies, such as chatbots or customer relationship management (CRM) systems. These tools can be customised to meet the specific needs of the SME, and can provide a cost-effective way to implement AI.
    • Invest in training and education: SMEs should invest in training and education to ensure that their employees have the necessary skills and knowledge to effectively leverage AI. This may involve providing training programmes or partnering with AI experts or consultants.
    • Monitor and measure performance: SMEs should monitor and measure the performance of their AI systems to ensure that they are achieving their goals and objectives. This may involve tracking metrics such as response times, customer satisfaction, or sales conversions.

    Finally, SMEs should be aware of the potential impact of AI on their workforce and take steps to mitigate any negative effects. This may involve re-skilling or upskilling employees to take on new roles that are more suited to an AI-enabled economy.

    In conclusion, the emergence of AI presents both opportunities and challenges for SMEs. While the adoption of AI can help SMEs to compete more effectively with larger companies, it also requires significant investment and expertise. SMEs that are able to effectively leverage AI will be well-positioned for success in the coming years, but those that fail to adapt may struggle to keep up.

    It is important for Ghana’s policy-makers and other stakeholders to support SMEs in their efforts to adopt AI. This may involve providing funding or other resources to help SMEs develop AI strategies, as well as promoting education and training programmes to help SMEs build the necessary skills and expertise.

    In the end, the future of SMEs – in light of an emerging trend in AI – depends on their ability to adapt to a rapidly changing economic landscape.

    Those that are able to do so will be well-positioned for success while those that fail to adapt may struggle to survive. The key is to approach AI adoption with caution and to be mindful of both the potential benefits and risks. By doing so, SMEs can ensure that they are prepared for the future of work in an AI-enabled economy.

  • Report shows AI could take over 300 million jobs

    Report shows AI could take over 300 million jobs

    According to a report by investment bank Goldman Sachs, artificial intelligence (AI) might replace the equivalent of 300 million full-time jobs.

    That might result in a quarter of work tasks in the US and Europe being replaced, but it might also create new jobs and boost productivity.

    It may also eventually result in a 7% increase in the total annual value of goods and services produced globally.

    The report calls generative AI “a major advancement”, since it can produce content that is indistinguishable from work produced by humans.

    The government is keen to promote investment in AI in the UK, which it says will “ultimately drive productivity across the economy”, and has tried to reassure the public about its impact.

    “We want to make sure that AI is complementing the way we work in the UK, not disrupting it – making our jobs better, rather than taking them away,” Technology Secretary Michelle Donelan told the Sun.

    The report notes AI’s impact will vary across different sectors – 46% of tasks in administrative and 44% in legal professions could be automated, but only 6% in construction and 4% in maintenance, it says.

    According to research cited by the report, 60% of workers are in occupations that did not exist in 1940.

    But other research suggests technological change since the 1980s has displaced workers faster than it has created jobs.

    And if generative AI is like previous information-technology advances, the report concludes, it could reduce employment in the near term.

    The long-term impact of AI, however, was highly uncertain, chief executive of the Resolution Foundation think tank Torsten Bell told BBC News, “so all firm predictions should be taken with a very large pinch of salt”.

    “We do not know how the technology will evolve or how firms will integrate it into how they work,” he said.

    “That’s not to say that AI won’t disrupt the way we work – but we should focus too on the potential living-standards gains from higher-productivity work and cheaper-to-run services, as well as the risk of falling behind if other firms and economies better adapt to technological change.”

    https://www.youtube.com/watch?v=FBX_lI822P4
  • Google reveals AI features in Gmail, Docs, and more to compete with Microsoft

    Google reveals AI features in Gmail, Docs, and more to compete with Microsoft

    A number of future generative AI features for Google’s Workspace apps, which include Google Docs, Gmail, Sheets, and Slides, have been unveiled.

    The features include new ways to generate, summarize, and brainstorm text with AI in Google Docs (similar to how many people use OpenAI’s ChatGPT), the option to generate full emails in Gmail based on users’ brief bullet points, and the ability to produce AI imagery, audio, and video to illustrate presentations in Slides (similar to features in both Microsoft Designer, powered by OpenAI’s DALL-E, and Canva, powered by Stable Diffusion).

    The announcement shows Google’s eagerness to catch up to competitors in the new AI race. Ever since the arrival of ChatGPT last year and Microsoft’s launch of its chatbot-enabled Bing this February, the search giant has been scrambling to launch similar AI features. The company reportedly declared a “code red” in December, with senior management telling staff to add AI tools to all its user products, which are used by billions of people, in a matter of months.

    But Google is definitely racing ahead of itself. Although the company has announced a raft of new features, only the first of these — AI writing tools in Docs and Gmail — will be made available to a group of US-based “trusted testers” this month. (This is also how Google announced availability for ChatGPT rival Bard.) Google says these and other features will then be made available to the public later in the year but didn’t specify when.

    You can see below the full list of AI-powered features Google says will be coming to Workspace apps in the future:

    • Draft, reply, summarize, and prioritize your Gmail
    • Brainstorm, proofread, write, and rewrite in Docs
    • Bring your creative vision to life with auto-generated images, audio, and video in Slides
    • Go from raw data to insights and analysis via auto-completion, formula generation, and contextual categorization in Sheets
    • Generate new backgrounds and capture notes in Meet
    • Enable workflows for getting things done in Chat

    An example of AI in Google Docs turning a prompt into a full job description. Image: Google

    Of all the new features, the AI writing and brainstorming tools in Docs and Gmail seem the most potentially useful. In a sample demo (GIF above), a user is shown the prompt “Help me write” and then enters a request: “Job post for a regional sales rep.” The AI system then completes the job spec for them in seconds, letting them edit and refine the text.

    Google expands on these potential functions in its press release: “Whether you’re a busy HR professional who needs to create customized job descriptions, or a parent drafting the invitation for your child’s pirate-themed birthday party, Workspace saves you the time and effort of writing that first version. Simply type a topic you’d like to write about, and a draft will instantly be generated for you. With your collaborative AI partner you can continue to refine and edit, getting more suggestions as needed.”

    A similar feature will let users rewrite text or expand it using AI tools. So, says Google, you might jot down a few bullet points about a work meeting. Google Docs can then expand this into a “more polished summary,” with users able to manually specify the tone (whether it should be “more whimsical” or “formal,” for example). In a video demo, Google shows AI being used to write personalized marketing messages for clients, turning bullet points into a full email, and summarizing the contents of a long email chain in Gmail. (Again, these are somewhat familiar features. Slack recently announced it will use ChatGPT to create similar summaries of discussions, for example.)

    One sample use case shows AI in Gmail turning bullet points into a full and formal email.


    It’s notable that Microsoft is rumored to be building similar features into its Office suite of apps, including Word, Teams, and Outlook. Microsoft famously unsettled Google this year with the launch of the new Bing. CEO Satya Nadella described AI-assisted search as a new paradigm that could unseat Google from its throne. But it seems the two companies will also be competing in the world of productivity software. Microsoft has scheduled an event where it will detail its plans for “the future of work with AI” later this week on March 16th.

    Of course, the rush to launch AI products has its dangers, too. AI text generating programs are notoriously unreliable, often “hallucinating” false information and presenting it with utter confidence. They’re also prone to regurgitating racial and gendered biases present in their training data.

    As Google integrates this technology into its enterprise software, these failings could cause major issues. What if Google’s AI summaries of your meetings misattribute quotations or ideas, for example? Or if your AI-generated marketing emails invent new clients or products? In its press release today, Google offered a standard disclaimer: “Sometimes the AI gets things wrong, sometimes it delights you with something offbeat, and oftentimes, it requires guidance.” But while users may see the funny side of Microsoft’s Bing chatbot going off the rails, they may take less kindly to an “offbeat” AI that costs them money.

  • The Nigerian AI artist reimagining a stylish old age

    The Nigerian AI artist reimagining a stylish old age

    Artworks generated by Artificial Intelligence (AI) have become a source of controversy, but Nigerian filmmaker and artist Malik Afegbua is making a case that it can challenge us to create a better real world – and a more stylish one for older people.

    At first glance, his images look like they were snapped on the edge of a fashion runway, but these models are not actually real people.

    Instead, the pictures are the result of Afegbua’s imagination working in conjunction with AI software, showing older-looking models in beautiful clothes.

    He knew he had created something special after he had posted them on social media.

    Especially after they caught the eye of the Oscar-winning costume designer behind the Black Panther films, Ruth Carter. “This is so dope!!” she wrote on Instagram.

    Models on a catwalk
    Malik Afegbua/SlickCity

    The series of images, called Fashion Show For Seniors, has attracted thousands of similar comments.

    With more than 100,000 likes for the pictures on social media, Afegbua’s work has clearly made an impact in the real world. But questions linger about whether computer-generated work is a threat to human creativity. There are ethical issues as well.

    The artist, though, takes a thoughtful and nuanced approach.

    Man on runway in emerald blazer
    Malik Afegbua/SlickCity

    We are just about to get the Zoom interview started, with Afegbua sitting in his home office in Lagos, Nigeria – when his two-year-old son calls out for a bit of attention.

    “He was born smart and everything he does is so techie. He already knows how to use mobile phones and iPads,” he says proudly.

    It is clear that he is passing down his love of technology and art to his son, but what made this business-school graduate pivot into pursuing a creative career?

    “Someone gifted me a camera and that’s where it took off.”

    He became a filmmaker and now produces commercials, documentary films and virtual reality exhibitions. He has also embraced AI as an emerging force in art.

    With his fashion show series, he saw an opportunity to confront what he sees as the marginalisation of older people in society and to challenge perceptions around ageing.

    “I’ve never seen a fashion show for elderly people, but they exist – so why not?”

    Models on a catwalk
    Malik Afegbua/SlickCity

    One obvious objection is that there are real elderly people, and real fashion designers, who could have been photographed in the real world.

    But for Afegbua it is the aspirational message behind the images that is crucial.

    He believes they can make people think: “What if we start doing things in this way?”

    Woman with a gold dress on runway
    Malik Afegbua/SlickCity

    There has been some backlash against the use of AI in art, centred around whether computers can truly replicate human creativity, but Afegbua sees this as an exciting opportunity for artists to evolve.

    AI image software either takes keywords (called prompts) that are suggested by the artist or uses uploaded photos, to create an image based on that information.

    What Afegbua says he is doing with his work is teaching AI to become more creative and, in turn, he makes new discoveries.

    “Artificial intelligence learns from us and learns from the World Wide Web. I try to learn from it as well. I try to learn how to talk to it, how to communicate better to get exact results from it.”

    African couple walking on runway
    Malik Afegbua/SlickCity

    For the Fashion Show for Seniors pictures, Afegbua went back and forth with several AI-image generators – he uses three different ones for a variety of results – to find a look that was just right for his “models”.

    “I’m a lover of fashion, and I always like to experiment. I wanted to mix traditional African Nigerian fashion with something futuristic, something Afro-futuristic.”

    Another set of pictures, which he calls his Fiction series, is also inspired by an idea of the future – despite dating the world he has created to 250,000 years ago.

    Galvanised by the stylings of Black Panther’s Wakanda army and his new Hollywood pal, Ruth Carter, the collection of images represents the people of Ngochola, an imagined ancient African civilisation.

    African girl with Afro-futuristic face paint
    Malik Afegbua/SlickCity

    “They can speak to machines with their minds because they’ve cracked different codes. They’re very technologically advanced in that they understand how to mix biology with, you know, technology, and combine it together,” he says of the people living in Ngochola.

    Afro-futuristic man on vehicle
    Malik Afegbua/SlickCity

    It is clear that Afegbua is an unapologetic champion for the use of AI in art, but he recognises concerns around its use may be valid.

    Recently there have been complaints that, without acknowledgement, artists’ original work is being used as source material which is then manipulated.

    This is not Afegbua’s method, but he knows that AI can be used like this.

    “When it comes to AI, there are a lot of ethical issues in terms of it stealing other people’s work to create lots of different things,” he admits. “It’s a tool – and every tool can be used in an unethical way.”

    There does not seem to be any let-up in demand for AI-generated images, with the #AIfilter hashtag racking up 1.3 billion views on TikTok, where users have been uploading selfies in return for a new computer-generated picture of themselves.

    Elderly couple on the beach
    Malik Afegbua/SlickCity

    Afegbua is an optimist when it comes to the use of technology in art.

    “I don’t think it has a shelf life. I think it’s only going to get better because the algorithms keep getting better. The engines keep getting better.”

    “I feel that it’s going to help shape the storytelling and the intentional picture of Africa now, because it makes things a lot more accessible.”

    Older woman wearing traditional gele on runway
    Malik Afegbua/SlickCity

    In this vein, Afegbua plans to continue developing the Elder series.

    He wants to use the AI technology to help re-imagine what is possible today and in the future.

    All images copyright Malik Afegbua/SlickCity.

  • Google awards a $30k research grant to a UCC lecturer for artificial intelligence

    Google awards a $30k research grant to a UCC lecturer for artificial intelligence

    A lecturer in the Department of Mathematics at the University of Cape Coast (UCC) has been awarded a $30,000 Google research grant to carry out Artificial Intelligence (AI) research.

    The award was granted to Dr Stephen Moore, who is also a co-founder of Ghana Natural Language Processing (Ghana NLP), to accelerate research in natural language processing (NLP) in low-resource languages in Ghana and Africa.

    Natural Language Processing is a branch of Artificial Intelligence (AI) that is focused on how computers can process languages as humans do.

    Since 2020, Dr Moore and his colleagues at Ghana NLP have been developing tools for both text and speech translation of low-resource languages including Twi, Dagbani, Ewe, Ga, Guruni, Igbo, etc.

    At the re-opening of Google’s new office in Accra, Ghana, in 2022, Dr Moore presented the state of the art of NLP development in Ghana and the opportunities the country will gain by training and developing young people for the future.

    He presented the first Ghanaian language translator, Khaya, which was launched by Ghana NLP together with Algorine (a partner company of Ghana NLP).

    The app uses state-of-the-art language models from NLP with the ambition to create a unified translator for several languages in Africa.

    Google awarded the grant in recognition of Ghana NLP’s efforts towards both the development of such important tools and the training of volunteers at Ghana NLP.

    Ghana NLP is a social enterprise seeking to make NLP accessible to Ghanaians through training, workshops and seminars. This is the first such award Google has granted to a Ghanaian researcher.

  • This guy is using AI to make a movie

    “Salt” resembles many science-fiction films from the ’70s and early ’80s, complete with 35mm footage of space freighters and moody alien landscapes.

    But while it looks like a throwback, the way it was created points to what could be a new frontier for making movies.

    “Salt” is the brainchild of Fabian Stelzer. He’s not a filmmaker, but for the last few months he’s been largely relying on artificial intelligence tools to create this series of short films, which he releases roughly every few weeks on Twitter.
    Stelzer creates images with image-generation tools such as Stable Diffusion, Midjourney and DALL-E 2. He makes voices mostly using AI voice generation tools such as Synthesia or Murf. And he uses GPT-3, a text-generator, to help with the script writing.
    There’s an element of audience participation, too. After each new installment, viewers can vote on what should happen next. Stelzer takes the results of these polls and incorporates them into the plot of future films, which he can spin up more quickly than a traditional filmmaker might since he’s using these AI tools.

    This image, which is part of the "Salt" short-film series by Fabian Stelzer, was created via Stable Diffusion with the prompt "a luxury apartment with large windows overlooking a lush arid mushroom jungle landscape, sci-fi film still, 1980s science fiction, screenshot from a movie".

    “In my little home office studio I can make a ’70s sci-fi movie if I want to,” Stelzer, who lives in Berlin, said in an interview with CNN Business from that studio. “And actually I can do more than a sci-fi movie. I can think about, ‘What’s the movie in this paradigm, where execution is as easy as an idea?’”
    The plot is, at least for now, still vague. As the trailer shows, it generally focuses on a distant planet, Kaplan 3, where an overabundance of what initially appears to be mineral salt leads to perilous situations, such as somehow endangering an approaching spaceship. To make things more confusing (and intriguing), there are also different narrative threads introduced and, perhaps, even some temporal anomalies.

    Many of Stelzer's "Salt" images use terms like "35mm" and "sci-fi." For this one, created with Midjourney, he typed "hi-res 35mm footage of long space ship freighter 1970s sci-fi, dark and beige atmosphere, dark electronics, salt crusts on the hull, sparse LEDs."

    The resulting films are beautiful, mysterious, and ominous. So far, each film is less than two minutes long, in keeping with Twitter’s maximum video length of two minutes and 20 seconds. Occasionally, Stelzer will tweet a still image and a caption that contribute to the series’ strange, otherworldly mythology.
    Just as AI image generators have already unnerved some artists, Stelzer’s experiment offers an early example of how disruptive AI systems could be to moviemaking. As AI tools that can produce images, text, and voices are becoming more powerful and accessible, it could change how we think about idea generation and execution — challenging what it means to create and be a creator. Although the following for these videos is limited, some in the tech space are watching closely and expect more to come.
    “Right now it’s in an embryonic stage, but I have a whole range of ideas of where I want to take this,” Stelzer said.

    “Shadows of ideas and story seeds”

    The idea for “Salt” emerged from Stelzer’s experiments with Midjourney, a powerful, publicly available AI system that users can feed a text prompt and get an image in response.
    The prompts he fed the system generated images that he said “felt like a film world,” depicting things like alien vegetation, a mysterious figure lurking in the shadows, and a weird-looking research station on an arid mining planet. One image included what appeared to be salt crystals, he said.
    “I saw this in front of me and was like, ‘Okay, I don’t know what’s happening in this world, but I know there’s lots of stories, interesting stuff,’” he said. “I saw narrative shades and shadows of ideas and story seeds.”

    For this "Salt" image, Stelzer, used Midjourney with the prompt "film still of a research station on a mining planet, sci-fi atmosphere, beige and dark, 1980s sci-fi movie, tense atmosphere, rare alien plants and vegetation, arid, dusty, fog."

    Stelzer has a background in AI: He co-founded a company called EyeQuant in 2009 that was sold in 2018. But he doesn’t know much about making films, so he started teaching himself with software and created a “Salt” trailer, which he tweeted on June 14 with no written introduction. (The tweet did include a salt-shaker emoji, however.)
    That was followed by what Stelzer calls the first episode a couple days later. He’s put out several so far, along with numerous still images and some brief film clips. Eventually, he hopes to cut the pieces of “Salt” into one feature-length film, he said, and he’s building a related company to make films with AI. He said it takes about half a day to make each film.
    The vintage sci-fi vibe is partly an homage to a genre Stelzer loves and partly a necessity due to the technical limits of AI image generators, which are still not great at producing images with high-fidelity textures.
    To get AI to generate the images, he crafts prompts that include phrases like “a sci-fi research outpost near a mining cave,” “35mm footage,” “dark and beige atmosphere,” and “salt crusts on the wall.”
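    The prompt-crafting step described above amounts to simple string composition. As a purely illustrative Python sketch (the function and variable names here are hypothetical, not part of Stelzer’s actual workflow), a reusable set of style phrases can be combined with any scene description:

```python
# Hypothetical sketch: composing a Midjourney-style prompt from a scene
# description plus reusable style modifiers, as the article describes.
def build_prompt(subject: str, modifiers: list[str]) -> str:
    """Join a scene description with style keywords into one prompt string."""
    return ", ".join([subject] + modifiers)

# Style phrases quoted in the article.
SALT_STYLE = [
    "35mm footage",
    "dark and beige atmosphere",
    "salt crusts on the wall",
]

prompt = build_prompt("a sci-fi research outpost near a mining cave", SALT_STYLE)
print(prompt)
```

    Keeping the style phrases in one reusable list is what gives a series its consistent look: every frame is generated with the same modifiers, and only the subject changes.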
    The look of the film is also fitting for Stelzer’s editing style as an amateur auteur. Because he’s using AI to generate still images for “Salt,” Stelzer uses some simple techniques to make the scenes feel animated, like jiggling portions of an image to make it appear to move or zooming in and out. It’s crude, but effective.

    “Salt” goes to college

    “Salt” has a small but charmed following online. As of Wednesday, the Twitter account for the film series had roughly 4,500 followers. Some of them have asked Stelzer to show them how he’s making his films, he said.

    To envision this view of the interior of a freighter, Stelzer fed Midjourney the prompt "hi-res 35mm footage of the inside of a large space ship freighter control room, in the center there is a person sitting on a chair, dark and beige atmosphere, dark electronics, salt crusts on the wall, sparse LEDs."

    Savannah Niles, director of product and design at AR and VR experience builder Magnopus, has been following along with “Salt” on Twitter and said she sees it as a prototype of the future of storytelling — when people actively participate and contribute to a narrative that AI helps build. She hopes that tools like those Stelzer uses can eventually make it cheaper and faster to produce films, which today can involve hundreds of people, take several years, and cost millions of dollars.
    “I think that there will be a lot of these coming up, which is exciting,” she said.
    It’s also being used as a teaching aid. David Gunkel, a professor at Northern Illinois University who has been watching the films via Twitter, said he’s previously used a short sci-fi film called “Sunspring” to teach his students about computational creativity.
    Released in 2016 and starring “Silicon Valley” actor Thomas Middleditch, it’s thought to be the first film that used AI to write its script. Now, he’s planning to use “Salt” in his fall-semester communication technology classes, he said.
    “It does create a world you feel engaged in, immersed in,” he said. “I just want to see more of what’s possible, and what will come out of this.”
    Stelzer said he has a “somewhat cohesive” idea of what the overall narrative structure of “Salt” will be, but he isn’t sure he wants to reveal it — in part because the community involvement has already made the story deviate in some ways from what he had planned.
    “I’m actually not sure whether the story I have in my mind will play out like that,” he said. “And the charm of the experiment to me, intellectually, is driven by the curiosity to see what I as the creator and the community can come up with together.”
    Source: CNN
  • Undeclared pools in France uncovered by AI technology

    The discovery of thousands of undeclared private swimming pools in France has provided an unexpected windfall for French tax authorities.

    Following an experiment using artificial intelligence (AI), more than 20,000 hidden pools were discovered.

    They have amassed some €10m ($9.9m; £8.5m) in revenue, French media report.

    Pools can lead to higher property taxes because they boost property value, and must be declared under French law.

    The software, developed by Google and French consulting firm Capgemini, spotted the pools on aerial images of nine French regions during a trial in October 2021.

    The regions of Alpes-Maritimes, Var, Bouches-du-Rhône, Ardèche, Rhône, Haute-Savoie, Vendée, Maine-et-Loire and Morbihan were part of the trial – but tax officials say it may now be rolled out nationwide.

    There were more than 3.2 million private swimming pools in France in 2020, according to data website Statista, with sales already booming before the Covid pandemic.

    But as more employees worked from home, there was a further surge in pool installations.

    According to Le Parisien newspaper, an average pool of 30 sq m (322 sq ft) is taxed at €200 ($200; £170) a year.

    The tax authorities say the software could eventually be used to find undeclared home extensions, patios or gazebos, which also play a part in property taxes.

    Antoine Magnant, the deputy director general of public finances, told Le Parisien: “We are particularly targeting house extensions like verandas.

    “But we have to be sure that the software can find buildings with a large footprint and not the dog kennel or the children’s playhouse,” he added.

    The crackdown comes after Julien Bayou, of France’s Europe-Ecology Greens party, did not rule out a ban on new private pools.

    Speaking to BFMTV, he said that France needs a “different relationship to water” and that the ban would be a “last resort”.

    “The challenge is not to ban swimming pools, it is to guarantee our vital water needs,” he said.

    His comments come as France grapples with its worst recorded drought, which has left more than 100 municipalities short of drinking water.

    In July, France had just 9.7mm (0.38 inches) of rain, making it the driest month since March 1961, the national weather service Meteo-France said.

    Irrigation has been banned in much of the north-west and south-east of France to conserve water.

    Source: BBC

  • Ghana to get Africa's first solar-powered Artificial Intelligence TV

    Ghana will, in about two months, have access to a solar-powered Artificial Intelligence TV, the first of its kind on the African continent.

    In an era where countries are constantly looking for alternative ways to power household appliances, this television set has been designed to meet that need.

    The move forms part of a new age of energy transition and digital transformation into the artificial intelligence (AI) movement.

    Introduced and produced by a South African tech giant, Agilitee Africa, the digital video broadcasting second generation (DVB-T2) full AI television set will not only help conserve energy, but can also be operated easily through its voice command feature.

    It can be powered with solar energy, increasing its viability in several geographical locations, and has ISO approval.

    The AI television is a next-generation set that uses telematics for voice recognition, allowing viewers to control the TV with spoken commands in place of a conventional remote.

    Ghana is one of many African countries including Rwanda, Nigeria, and South Africa to benefit from this innovative product.

    CEO of the firm, Dr. Mandla Lamba, intimated that the products will start selling in February 2022 in other African markets.

    “We will do Ghana, Zambia, Kenya, Malawi, Zimbabwe, Swaziland, Tanzania and Namibia in January and start selling on the 1st of February in all,” he noted.

    Source: Agilitee Africa

  • UK spies will need artificial intelligence – Rusi report

    UK spies will need to use artificial intelligence (AI) to counter a range of threats, an intelligence report says.

    Adversaries are likely to use the technology for attacks in cyberspace and on the political system, and AI will be needed to detect and stop them.

    But AI is unlikely to predict who might be about to commit serious crimes, such as terrorism – and will not replace human judgement, it says.

    The report is based on unprecedented access to British intelligence.

    The Royal United Services Institute (Rusi) think tank also argues that the use of AI could give rise to new privacy and human-rights considerations, which will require new guidance.

    The UK’s adversaries “will undoubtedly seek to use AI to attack the UK”, Rusi says in the report – and this may include not just states, but also criminals.

    Fire with fire

    The future threats could include using AI to develop deepfakes – where a computer can learn to generate convincing faked video of a real person – in order to manipulate public opinion and elections.

    It might also be used to mutate malware for cyber-attacks, making it harder for normal systems to detect – or even to repurpose and control drones to carry out attacks.

    In these cases, AI will be needed to counter AI, the report argues.

    “Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload. It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures,” argues Alexander Babuta, one of the authors.

    The independent report was commissioned by the UK’s GCHQ security service, and had access to much of the country’s intelligence community.

    All three of the UK’s intelligence agencies have made the use of technology and data a priority for the future – and the new head of MI5, Ken McCallum, who takes over this week, has said one of his priorities will be to make greater use of technology, including machine learning.

    However, the authors believe that AI will be of only “limited value” in “predictive intelligence” in fields such as counter-terrorism.

    The often-cited fictional reference is the film Minority Report where technology is used to predict those on the path to commit a crime before they have carried it out.

    But the report argues this is less likely to be viable in real-life national security situations.

    Acts such as terrorism are too infrequent to provide sufficiently large historical datasets to look for patterns – they happen far less often than other criminal acts, such as burglary.

    Even within that data set, the background and ideologies of the perpetrators vary so much that it is hard to build a model of a terrorist profile. There are too many variables to make prediction straightforward, with new events potentially being radically different from previous ones, the report argues.

    Any kind of profiling could also be discriminatory and lead to new human-rights concerns.

    In practice, in fields like counter-terrorism, the report argues that “augmented” – rather than artificial – intelligence will be the norm – where technology helps human analysts sift through and prioritise increasingly large amounts of data, allowing humans to make their own judgements.

    It will be essential to ensure human operators remain accountable for decisions and that AI does not act as a “black box”, from which people do not understand the basis on which decisions are made, the report says.

    Source: reuters.com