What Does Information Security Mean in a Democracy?

My understanding is that authoritarian regimes take information security very seriously. To these regimes, information security involves controlling information that can threaten the regime–e.g., information that reveals corruption or human rights violations. But democratic countries should care about information security, too–albeit under a different definition. In democratic societies, I think information security should involve protecting public discourse from malicious information warfare, while ensuring that accurate information plays a central role in the discourse. Who will be doing the “protecting”? In my view, the independent press will play a key role in this, and possibly academics and think tanks. Here are some quick thoughts about this:

1. We need to have a national discussion, reaffirming the importance of fact-based opinions that rely on sound arguments. Part of the discussion should involve defining and clarifying these concepts. We should also discuss what constitutes a poor argument and the reasons conspiracy theories are dangerous.

2. We should think of the free press as vital to our national security. Our democracy depends on a strong, independent free press–one that operates with high journalistic standards and has a steady flow of resources. Ideally, I’d want to find a way to devote resources to the press that aren’t so commercial in nature.

3. We may need to think of creating different institutions, tools, or methods to help ensure information security.


4. Idea: an offline project. Create news parties or discussion groups, targeting people on social media. Organizers would discuss information warfare, current false stories, etc. These offline projects could make individual citizens more resilient to information warfare.

5. Create scorecards for news agencies and journalists–posted on the internet and attached to the avatars/accounts of both on social media. Scorecards would reflect how closely they live up to journalistic standards. Agencies or journalists with a history of blatant propaganda and lies could receive a designation reflecting that as well; there could also be labels that reflect various degrees of this.

6. Scorecards could be compiled by a professional journalism organization, like a bar association or professional guild. The organization should be made up of a diverse group–politically, socially, economically, ethnically, etc. (Various nations and politicians could be scored as well.)

Both #5 and #6 are ways of creating filters with credibility. The institutions and individuals with a history of good, reliable journalism deserve more trust (not blind trust), and the scorecards are a way for individuals to identify these groups and individuals.
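To make the scorecard idea a bit more concrete, here is a minimal sketch of how one might represent such a scorecard in code. The criteria names, the 0–10 scale, and the label wording are all my own assumptions for illustration; the post itself leaves those details to the proposed journalism organization.

```python
from dataclasses import dataclass, field

@dataclass
class Scorecard:
    """An illustrative scorecard for a news agency or journalist."""
    name: str
    # criterion -> 0..10 rating; criteria here are hypothetical examples
    scores: dict = field(default_factory=dict)
    # special designations, e.g. a history of blatant propaganda
    labels: list = field(default_factory=list)

    def overall(self) -> float:
        """Average rating across all scored criteria (0.0 if unscored)."""
        if not self.scores:
            return 0.0
        return sum(self.scores.values()) / len(self.scores)

card = Scorecard(
    name="Example Daily",
    scores={"sourcing": 8, "corrections_policy": 9},
)
print(card.overall())  # 8.5
```

A real system would need much more (who assigns scores, how disputes are resolved, how scores attach to social media accounts), but even this skeleton shows the two distinct pieces the post describes: graded adherence to standards, and categorical warning labels.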

9 thoughts on “What Does Information Security Mean in a Democracy?”

  1. Monetary Incentives That Decrease Information Security

    If this is correct, this feature of YouTube dovetails nicely with the objectives of any actor that wants to manipulate and poison our information space. This isn’t just about the health of our democracy; it’s a national security issue. Or, to put it more accurately, they’re both the same thing.

  2. I’m not sure if this is the best place for the following thread, but it’s close enough:


    Some passages from the article:

    Dr Kogan – who later changed his name to Dr Spectre, but has subsequently changed it back to Dr Kogan – is still a faculty member at Cambridge University, a senior research associate. But what his fellow academics didn’t know until Kogan revealed it in emails to the Observer (although Cambridge University says that Kogan told the head of the psychology department), is that he is also an associate professor at St Petersburg University. Further research revealed that he’s received grants from the Russian government to research “Stress, health and psychological wellbeing in social networks”.

    Kogan collected a lot of data from Facebook:

    What the email correspondence between Cambridge Analytica employees and Kogan shows is that Kogan had collected millions of profiles in a matter of weeks. But neither Wylie nor anyone else at Cambridge Analytica had checked that it was legal. It certainly wasn’t authorised. Kogan did have permission to pull Facebook data, but for academic purposes only. What’s more, under British data protection laws, it’s illegal for personal data to be sold to a third party without consent.

    “Facebook could see it was happening,” says Wylie. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use. So they were like, ‘Fine’.”

    Kogan maintains that everything he did was legal and he had a “close working relationship” with Facebook, which had granted him permission for his apps.

    There are other dramatic documents in Wylie’s stash, including a pitch made by Cambridge Analytica to Lukoil, Russia’s second biggest oil producer. In an email dated 17 July 2014, about the US presidential primaries, Nix wrote to Wylie: “We have been asked to write a memo to Lukoil (the Russian oil and gas company) to explain to them how our services are going to apply to the petroleum business.” Nix said that “they understand behavioural microtargeting in the context of elections” but that they were “failing to make the connection between voters and their consumers”. The work, he said, would be “shared with the CEO of the business”, a former Soviet oil minister and associate of Putin, Vagit Alekperov.

    “It didn’t make any sense to me,” says Wylie. “I didn’t understand either the email or the pitch presentation we did. Why would a Russian oil company want to target information on American voters?”

    Mueller’s investigation traces the first stages of the Russian operation to disrupt the 2016 US election back to 2014, when the Russian state made what appears to be its first concerted efforts to harness the power of America’s social media platforms, including Facebook. And it was in late summer of the same year that Cambridge Analytica presented the Russian oil company with an outline of its datasets, capabilities and methodology. The presentation had little to do with “consumers”. Instead, documents show it focused on election disruption techniques. The first slide illustrates how a “rumour campaign” spread fear in the 2007 Nigerian election – in which the company worked – by spreading the idea that the “election would be rigged”. The final slide, branded with Lukoil’s logo and that of SCL Group and SCL Elections, headlines its “deliverables”: “psychographic messaging”.

    and

    Russia, Facebook, Trump, Mercer, Bannon, Brexit. Every one of these threads runs through Cambridge Analytica. Even in the past few weeks, it seems as if the understanding of Facebook’s role has broadened and deepened. The Mueller indictments were part of that, but Paul-Olivier Dehaye – a data expert and academic based in Switzerland, who published some of the first research into Cambridge Analytica’s processes – says it’s become increasingly apparent that Facebook is “abusive by design”. If there is evidence of collusion between the Trump campaign and Russia, it will be in the platform’s data flows, he says. And Wylie’s revelations only move it on again.

    More about Kogan

    One thing that stood out in the video: Nix talks about ways CA hides the fact that it is involved in an election, citing posing as “students doing research projects attached to a university.” I have to check, but I believe Alex Kogan, one of the researchers(?) at CA, has some grant or connection to St. Petersburg University (in Russia).

    Interesting points about whether Cambridge Analytica’s techniques/methods are bunk and why that’s not really relevant:

  3. I think separating acceptable from unacceptable practices of campaigning is important, because not all of the techniques that use social media and data are inappropriate. For example, I don’t think a lot of what is mentioned below is out of bounds:

    Trolling-as-a-service, one stop shops for analyzing, engaging, and mobilizing online audiences, will be a method all political campaigns seek to exploit in the future. Just think about what a political campaign needs: a deep insight into voter demographics and desires, voter preferences and dislikes; key emotional vulnerabilities constituents didn’t even know they possess; the ability to efficiently spend advertising on specific voting blocs; the rapid creation of engaged audiences for a newly formed campaign; and the ability to destroy a political foe with information attacks while hiding the hand of the attacker.

    These activities become questionable, and maybe worse, when individuals don’t know about or consent to their data being used, and when a campaign attacks an opponent using false information or bad-faith arguments. This last point brings up an important distinction–namely, campaign practices that are distasteful and even harmful to a democracy, but that don’t necessarily pose a serious threat to democracy or national security. For example, a political campaign will always spin information to frame opponents in the most negative light, using an approach that is distorting and not always intellectually honest. Campaigns will also appeal to emotions, like fear, in quasi-demagogic or racist ways, to win elections. These are less than ideal tactics for a democracy, but employing them doesn’t necessarily pose a threat to the entire democracy or even national security–at least it hasn’t in the past.

    Now that we have adversaries that seek to erode liberal democracies–by encouraging mistrust in democratic institutions and leaders and by poisoning the information space–we might need to rethink all of the tactics above.

  4. Why Information Vulnerability is an Issue

    I’ve talked about this many times, but I want to briefly mention it here. The following short video made me think of this issue:

    Why are we struggling to agree upon relevant facts (at least in political discourse)? To me, the primary answer is the internet. Here are the steps leading to where we are now:

    1. The internet increases the flow and sources of information. That is, we have more voices besides the traditional gatekeepers. This weakens the authority of those gatekeepers.

    2. The internet dries up the revenue stream for print journalism, weakening a key gatekeeper or information filter.

    The next point is something I may not have discussed much. How do individuals in a nation find consensus over the key facts of an issue? Off the top of my head, I’d say people rely on the authority of gatekeepers. The two that come to mind are the news media (including periodicals) and political parties. If these institutions lose their authority, then agreement upon the relevant facts starts to erode.

    An important point here is that agreement doesn’t derive largely from large numbers of individuals thoughtfully and critically identifying the relevant facts on their own. In other words, increasing the number of critical thinkers in a society wouldn’t have as much impact as giving gatekeeping institutions more authority and trust.

    If this is correct, strengthening old gatekeeping institutions and/or developing new institutions or processes will be a crucial way of providing information security.

  5. If this were completely true, I don’t think you would keep sharing tweets by people you seem to find credible. While many of these people established cred through traditional gatekeepers, you’re relying on that cred even in the absence of gatekeeping for these specific items. So there is a way for people to be credible without an institutional gatekeeper, unless you’re calling these people gatekeepers themselves. But I don’t think you are, because that’s what other people do, which is leading to all this confusion.

    1. If this were completely true, I don’t think you would keep sharing tweets by people you seem to find credible.

      They’re gatekeepers for me, but not for people of all political persuasions. I didn’t make that clear. Of course, every individual can–and does–choose an individual, institution, or entity to fulfill a gatekeeping role, but we need ones that large numbers of people, left, right, and center, see as authoritative and trustworthy. Or am I misunderstanding what you said?

  6. No, that makes sense. So it’s okay to have personal sources of credible facts, as long as there are some generally agreed-on institutional sources as well? This still sounds like what’s causing the problem you propose.

  7. No, that makes sense. So it’s okay to have personal sources of credible facts, as long as there are some generally agreed-on institutional sources as well? This still sounds like what’s causing the problem you propose.

    Well, if those personal sources of information have equal or greater authority and legitimacy than the institutional sources–sources that appeal to a big, diverse group–then yes, that would be a problem. We need sources that a large, diverse group of citizens trusts and views as authoritative.

  8. I read the thread, but not the entire article yet. But I wanted to post a link here before I forget. Also, I wanted to write down a thought that came to mind (which I probably mentioned before): the current information landscape is the Wild Wild West. I guess libertarians like Mitchell, or even anarchists, like this. But there’s a dark side. In my view, we need rules, norms, and enforcement; otherwise the people with bad intentions will do a lot of damage.
