At a glance.
- Google announces watermarking technology for synthetic images.
- China using AI to generate content for influence campaigns.
- Internal Russian disagreements: military and academic.
- Pay-to-play in the Russian milblogger space.
- Ukraine's and Russia's presidents have sharply different views of the state of the war.
- The EU finds Big Tech soft on Russian disinformation.
- Russian television service introduced into occupied Donetsk.
Google announces watermarking technology for synthetic images.
Google has announced SynthID, a watermarking technology that, operating at the pixel-level, makes it difficult for manipulators of images to remove the mark of their provenance.
Eduardo Azanza, CEO at Veridas, wrote to express approval of Google's development. “This solution is a step in the right direction. As a leading tech giant, Google’s SynthID is demonstrating a path that other influential tech companies should follow in addressing the challenges posed by AI-generated images and videos. However, it is crucial to recognize that regardless of their prominence, Google cannot act alone in this fight. Threat actors are constantly evolving and finding ways to circumvent defensive measures. This calls for the need for a multifaceted approach and the establishment of a universally accepted standard. The focus now needs to shift to fostering cooperation and communication among the biggest tech companies to successfully combat negative AI usage globally.”
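Google has not published SynthID's algorithm, which reportedly uses deep learning rather than anything this simple. But a classic least-significant-bit scheme, sketched below as a purely hypothetical illustration, shows what "pixel-level" watermarking means, and why robustness against manipulation is the hard part.

```python
# Toy illustration of pixel-level watermarking. This is NOT Google's
# SynthID algorithm (which is unpublished and deep-learning based); it is
# a classic least-significant-bit (LSB) scheme shown only to illustrate
# the idea of embedding provenance data in pixel values.

def embed_watermark(pixels, bits):
    """Embed watermark bits into the least significant bit of each pixel."""
    return [(value & ~1) | bit for value, bit in zip(pixels, bits)]

def extract_watermark(pixels, length):
    """Recover the first `length` watermark bits from pixel LSBs."""
    return [value & 1 for value in pixels[:length]]

image = [200, 13, 255, 128, 64, 99]   # grayscale pixel values (0-255)
mark = [1, 0, 1, 1, 0, 1]             # watermark bit pattern

marked = embed_watermark(image, mark)
recovered = extract_watermark(marked, len(mark))
assert recovered == mark
```

Each pixel changes by at most one intensity level, so the mark is imperceptible; but unlike SynthID's stated goal, a naive LSB mark is trivially destroyed by re-encoding, resizing, or cropping, which is exactly the circumvention problem Azanza alludes to.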
China using AI to generate content for influence campaigns.
Microsoft this morning described a trend in Chinese influence operations: using AI-generated images to help make their point. The images may be slick traffic-stoppers, but the approach is old and crude. The AI-generated imagery includes such stuff as the Statue of Liberty depicted as the "goddess of violence," clutching a machine pistol in her left hand. It's better than the old-style cartoons of Uncle Sam as a vampire and so forth, which were indeed badly drawn, but it's fundamentally the same technique. Lady Liberty indeed looks like a menacing, dead-eyed version of her familiar self (but on the other hand the AI gave her right hand six fingers). Current campaigns do show a shift toward the disruptive as opposed to the persuasive. And the images aren't deepfakes, just better-executed cartoons.
Internal Russian disagreements: military and academic.
The ISW also reported two instances of internal discord in Russia. The first is military, and indicates morale and command problems in the Russian army ranks. "Select Russian sources claimed that Russian officers of the 58th Combined Arms Army (CAA) defending in Zaporizhia Oblast contacted former 58th CAA commander Major General Ivan Popov due to the worsening situation at the Russian frontline. Russian milbloggers claimed that Popov has maintained contact with his former subordinates in western Zaporizhia Oblast, and a Russian insider source claimed that these officers turned to Popov for help instead of their new commander." General Popov was relieved of command of the 58th Combined Arms Army in July, when he sought to "bypass Chief of the Russian General Staff Army General Valery Gerasimov and bring his complaints about poor counterbattery capabilities, heavy losses, and a lack of rotations directly to Russian President Vladimir Putin."
General Popov wasn't alone in seeing these as problems for the Russian forces. He set an example of military indiscretion that soldiers will find admirable when they perceive incompetence in their leaders. The ISW adds, "Russian sources claimed that Popov encouraged his former subordinates to report the truth about the front to the higher Russian command, possibly encouraging them to replicate his insubordination. Popov’s contact with his former subordinates, if true, suggests that Popov’s replacement has not won the trust of his subordinates either because he is less competent or because he is less forthright with senior Russian leadership about continuing challenges facing the Russian defense in western Zaporizhia."
The other species of discord comes from hard-war ultras, who have protested an August 29th essay by the Director of the Institute for the Study of the USA and Canada, a think tank with deep roots in the Cold War Soviet Union. The Director, Valery Garbuzov, criticized "Russian ruling elites" for creating "utopian myths" about impending Russian hegemony, a “crisis of capitalism,” and Russia’s leadership of a global anti-Western coalition. Mr. Garbuzov is correct in pointing out that none of these three things is actually happening, but the ultras don't want to hear it. The ISW describes their criticisms as "largely coherent," and notes that some of the influential milbloggers see la trahison des clercs (the treason of the intellectuals): they think the Telegram social media platform is doing the work the academicians should have taken up, but have shirked. Mr. Garbuzov was subsequently fired from his directorship.
Pay-to-play in the Russian milblogger space.
The Institute for the Study of War reported Saturday, "Prominent Russian milbloggers likely have a monetary incentive to regularly report information about the war in Ukraine that is uncritical of Russian authorities. BBC reported on September 1 that prominent Russian milbloggers claimed that they can make between about 48,000 and 188,000 rubles (about $500 to 1,950) per advertisement on their Telegram channels. BBC reported that an advertising agent working with Wagner-affiliated channels claimed that a prominent Wagner Group-affiliated source made around 31,500 rubles (about $330) per advertisement. The advertising agent told BBC that several employees of RIA FAN, a now-shuttered media outlet affiliated with former Wagner financier Yevgeny Prigozhin, received only about 10,500 to 21,800 rubles (about $108 to $226) per advertisement due to their lower subscriber count. BBC noted that Russia’s average monthly salary is about 66,000 rubles (about $685). Prominent milbloggers’ monthly salaries are thus likely much higher than the Russian average. Russian milbloggers are likely economically incentivized to maintain and grow audiences through war reporting that is uncritical of Russian authorities, as criticism of the Russian authorities, resistance to attempted censorship, and potential legal problems could lead to a decrease in advertisements, although milbloggers who present themselves as telling unpleasant truths can also gain large followings. Alexander “Sasha” Kots, a prominent milblogger who also serves on the Kremlin’s Human Rights Council, claimed that milbloggers have a “direct channel to privately communicate information” to the Russian MoD."
Ukraine's and Russia's presidents have sharply different views of the state of the war.
Saturday morning President Zelenskiy tweeted, optimistically, "Ukrainian forces are moving forward. Despite everything and no matter what anyone says, we are advancing, and that is the most important thing. We are on the move."
For his part, President Putin said that Ukraine is being pushed back on all fronts, which in fairness is something no one else sees. He added that he now understands why Russia won the Great Patriotic War, and that Russia "has been, and remains, invincible." He also said that Russia planned, "over the next two and a half years," to make a major investment of almost two trillion rubles in the development of conquered Ukrainian territories to "bring them up to the all-Russia level" in such social matters as medicine, education, and infrastructure. Two trillion rubles is, at current exchange rates, roughly equivalent to a bit less than 21 billion US dollars.
The EU finds Big Tech soft on Russian disinformation.
The European Commission released a study last week, "Digital Services Act: Application of the Risk Management Framework to Russian disinformation campaigns," in which it found that major tech companies' efforts to control disinformation were falling short of their aspirations, and that Russian disinformation concerning Russia's war against Ukraine had in fact increased on many widely used platforms. The BBC reports that X, the platform formerly known as Twitter, seems to have been notably irregular in its attempts to control disinformation. The Kyiv Post notes that, "The authors warned the 'reach and influence of Kremlin-backed accounts has grown further in the first half of 2023, driven in particular by the dismantling of Twitter's safety standards'," but also points out that X isn't alone in this regard, and that the platform has said it's working to do better.
Early in its report the Commission offered a characterization of Russia's hybrid war that's unambiguous and justifiably hostile. "On 24 February 2022, Russia attacked all of Ukraine, eight years after Russian troops entered Crimea and Ukraine’s Donbas regions. Russia’s military strategy has since not only resulted in harrowing violence in Ukraine—it also extended to online spaces, enabling acts of information warfare far beyond Ukraine’s borders. Kremlin operatives have deliberately manipulated the features of social media platforms to spread disinformation and influence public opinion."
The disinformation extended to domestic and international audiences. "Both inside and outside Russia, the Kremlin’s disinformation strategy followed two tactical objectives: suppressing the truth about the war and amplifying lies about an alleged 'special operation' to free Ukraine from 'Nazism'. Inside Russia, the Kremlin moved swiftly to block social media platforms such as Facebook or Twitter and to tighten media censorship in order to cut Russians off from images of the horror their country was inflicting on Ukrainians. At the same time, the Kremlin leveraged its ecosystem of state-controlled media to flood the remaining platforms in Russia with lies and self-serving conspiracies."
Externally directed disinformation was more complex and more carefully constructed. "Outside Russia, the Kremlin’s disinformation strategy followed the same objectives, but it was more subtle. Of course, the Kremlin could not censor the free media of other countries, or block Facebook across the continent to isolate Europeans from the truth. Instead, the Kremlin and its proxies captured growing audiences with highly produced propaganda content, and steered users to unregulated online spaces, where democratic norms have eroded and hate and lies could be spread with impunity."
The report sees this approach as an old one, dating back at least to the early years of the Cold War. "This is an old playbook: The Kremlin has attempted to manipulate foreign communication systems and public opinion long before the rise of Facebook and Google. The so-called information warfare doctrine goes back to early Soviet times – it builds on 'reflexive control.' The idea is to shape how adversaries think about an issue, while concealing the activities of manipulation so that the targets remain unaware. Since the 1950s, the Soviet security agency (KGB) hosted a department dedicated to spreading disinformation in other countries, including antisemitic, racist narratives designed to deepen socio-political divides."
Tech companies recognized that such a playbook was likely to reappear, and they took steps to comply with European Commission guidance on the control of disinformation. "Notably, in June 2022 all major platforms except Telegram" (Telegram, while based in the United Arab Emirates, has a large base of Russian users) "signed a strengthened Code of Practice on Disinformation based on the European Commission’s guidance. In theory, the requirements of this voluntary Code were applied during the second half of 2022 – during our period of study. Companies published the results of compliance efforts in January 2023. This Code includes some commitments analogous to the mitigation requirements codified in Article 35 of the DSA and thus has clear relevance to this analysis. In particular, the Code has measures that (if enforced) would effectively curtail specific high-risk content. However, it was not designed to address a systemic information warfare perpetrated by state-backed actors across platforms that includes tactics far beyond the spread of disinformation."
The Commission's report presents the control of disinformation as an exercise in risk management. "This method of protecting the public from harm while simultaneously protecting freedom of expression calls for evaluation of the probability of real world harms." Its Framework therefore includes risk assessment metrics and a set of mitigation measures. Disinformation, at one level of abstraction, is marketing, and the risk criteria resemble marketing metrics:
- Audience size,
- Amplification factor (roughly, a measure of "virality"),
- Re-channeling, and
- Toxicity ("the level of unmoderated harmful interactions on a platform").
Actually assessing the effectiveness of disinformation is more difficult. In marketing, the ultimately important measures are sales and market share, and there are no exact analogues of these among the criteria. (Public opinion research might offer an approach to such an analogue.)
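The report does not publish a scoring formula for its four risk criteria. As a purely hypothetical illustration of how such marketing-style metrics might be reduced to a comparable risk score, one could normalize each criterion and combine them; the weights and caps below are arbitrary assumptions, not the Commission's method.

```python
# Hypothetical sketch only: the Commission names four risk criteria
# (audience size, amplification, re-channeling, toxicity) but gives no
# formula. This shows one way such metrics could yield a single score.
import math

def risk_score(audience, amplification, rechanneling, toxicity):
    """Combine the four criteria into a rough 0-1 risk score.
    audience: follower/viewer count
    amplification: average reshares per post ("virality")
    rechanneling: fraction of content pushed onward to other platforms (0-1)
    toxicity: fraction of harmful interactions left unmoderated (0-1)
    """
    # Log-scale audience so a 10x larger channel doesn't dominate outright;
    # the divisor 9 (~1.0 at a billion) is an arbitrary normalization.
    reach = math.log10(max(audience, 1)) / 9
    virality = min(amplification / 100, 1.0)  # cap at 100 reshares per post
    # Equal weighting is an assumption, not anything from the report.
    return 0.25 * (reach + virality + rechanneling + toxicity)

big_channel = risk_score(audience=5_000_000, amplification=80,
                         rechanneling=0.6, toxicity=0.4)
small_channel = risk_score(audience=2_000, amplification=2,
                           rechanneling=0.1, toxicity=0.05)
assert big_channel > small_channel
```

The missing "sales figure" the text notes (actual opinion change) is exactly what such a score cannot capture; it measures exposure, not persuasion.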
Having gauged the risk, platforms would then apply appropriate mitigations to reduce the probable effect of disinformation. Mitigations themselves are assessed against nine metrics:
- Speed and consistency of removal:
- "The platforms proactively identify and remove vast quantities of violative material; here the relevant metrics are the time between posting and removal, and the exposure and engagement with the content before removal.
- "Content moderation seeks to enforce platform policies, without unduly restricting freedom of speech. False positive and false negative rates will help determine that the balance has been appropriately struck.
- "Users and regulators will expect linguistic equity—that platforms provide the same level of service (e.g. content moderation) across languages and countries."
- Deamplification (that is, down-ranking by platform algorithms, recommendation removals, searchability limitation, and demonetization).
- Non-Follower Engagement (which measures, in part, the success of deamplification).
- Consistency of labelling (as, for example, state media, or fact-checked falsehood).
- Responsiveness to user notifications.
- Redress of denial of service.
- Restrictions on Inauthentic Behaviour.
- Restrictions on Algorithmic Exploitation.
- Denylisting URLs.
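The report does not specify how these metrics are computed. As a hedged sketch under the assumption that a platform has a moderation log with ground-truth labels, the first few (time-to-removal, pre-removal exposure, and false positive/negative rates) might be derived like this; the `Post` structure and field names are invented for illustration.

```python
# Hedged sketch (not from the report, which gives no formulas): computing
# removal-speed, exposure, and false positive/negative metrics from a
# hypothetical moderation log. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    violative: bool              # ground truth: actually violates policy
    removed: bool                # the platform's moderation decision
    hours_to_removal: float = 0.0
    views_before_removal: int = 0

def moderation_metrics(posts):
    caught = [p for p in posts if p.violative and p.removed]
    false_pos = sum(1 for p in posts if p.removed and not p.violative)
    false_neg = sum(1 for p in posts if p.violative and not p.removed)
    benign = sum(1 for p in posts if not p.violative)
    violative = sum(1 for p in posts if p.violative)
    return {
        # Time between posting and removal (median over caught posts).
        "median_hours_to_removal": sorted(
            p.hours_to_removal for p in caught)[len(caught) // 2]
            if caught else None,
        # Exposure before removal.
        "mean_views_before_removal": (
            sum(p.views_before_removal for p in caught) / len(caught)
            if caught else None),
        # Over-removal: share of benign posts wrongly taken down.
        "false_positive_rate": false_pos / benign if benign else 0.0,
        # Under-removal: share of violative posts left up.
        "false_negative_rate": false_neg / violative if violative else 0.0,
    }

log = [
    Post(violative=True, removed=True, hours_to_removal=2.0,
         views_before_removal=300),
    Post(violative=True, removed=True, hours_to_removal=26.0,
         views_before_removal=9000),
    Post(violative=True, removed=False),   # missed: false negative
    Post(violative=False, removed=True),   # over-removal: false positive
    Post(violative=False, removed=False),
]
metrics = moderation_metrics(log)
```

The false positive rate operationalizes the report's concern about "unduly restricting freedom of speech," while the false negative rate captures under-enforcement; the balance the report describes is a trade-off between the two.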
Despite some successes, the report finds "that the mitigation measures applied by the platforms were largely ineffective," and that the platforms weren't ready to cope with information warfare.
Russian television service introduced into occupied Donetsk.
This morning the UK's Ministry of Defence described Russian television's introduction into Donetsk, and its replacement of other alternative sources of news. "Residents of the Russian-controlled area of Donetsk Oblast in Ukraine are now receiving Russian-language local news bulletins from one of Russia’s major broadcast organisations. On 4 September, the All-Russia State Television and Radio Broadcasting Company (VGTRK) opened a Donetsk franchise and commenced broadcasting in the internationally unrecognised Donetsk People’s Republic (DPR). Local news bulletins are provided by Russia’s Rossiya 1TV Channel and present the Russian view of the war. This is part of Russia’s broader effort to assert enduring control of the area. Ukraine-based Russian language television and radio stations were freely available in the now-annexed areas before 2014. After the invasion, pan-Ukraine providers continued to provide locally sourced Russian-language content. DPR-government-controlled and aligned broadcasters also rebroadcast Russian national news programming as part of a propaganda campaign but did not provide regional bulletins. Broadcasting VGTRK in Donetsk has taken over a year to achieve, having first been announced in 2022. This was almost certainly due to the refusal to work of trained local technicians. Those sympathetic to the DPR and with the required skills have now likely been brought in from Crimea, Luhansk and elsewhere. Although blocked over the airwaves, Ukrainian broadcasting is still accessible to a wide audience via the internet. Where Russian filtering restrictions are in force, audiences use VPN or other active circumvention technologies. Mobile phones linked to Ukrainian providers are highly likely unfettered."