KHUSHBU SHAH: What would you both be looking for, as well? J.C., I’ll start with you.

JUAN CARLOS LARA: Yeah, as part of the [FOC advisory network], of course, there might be some idea of what's coming when we speak about principles for governments on the use of surveillance capabilities.

However, there are two things that I think are very important to consider for this type of issue. The first is which principles and which rules are adopted by states. I mean, it's very good news that we have this executive order as a first step toward thinking about how states refrain from using surveillance technology disproportionately or indiscriminately. That's a good sign in general. That's a very good first step. But secondly, within this same idea, we would expect other countries to follow suit and hopefully to expand the idea of bans on spyware, or bans on surveillance technology that by itself may pose grave risks to human rights, and not just in the case of commercial spyware, which is a very important threat, including for countries in Latin America that are regular customers of certain spyware producers and vendors.

But separately from that, I think it's very important to also understand how this ties into the purposes of the Freedom Online Coalition and its principles, and how to have further principles that hopefully pick up on the learnings from several years of discussion on the deployment of surveillance technologies, especially by academia and civil society. If those are picked up by the governments themselves as principles, we would expect them to exist in practice.

One of the key parts of the discussion on commercial spyware is that I can easily think of a couple of Latin American countries that are regular customers. And one of them is an FOC member. That's very problematic when we speak about whether or not they are abiding by these principles and by human-rights obligations, and therefore whether these principles will generate any kind of restraint in the use and procurement of such surveillance tools.

KHUSHBU SHAH: So I want to follow up on that. What are the dangers and gaps of having this conversation without proposing privacy legislation? I want to ask both of our—

JUAN CARLOS LARA: Oh, very briefly. Of course, enforcement, and the fact that rules may not have the institutional framework to operate, is a key challenge. That is also tied to capacities, like having people with enough knowledge and, of course, enough exchange of information between governments. And resources. I think it's very important that governments are able to enact the laws they put on the books and to enforce them, but also to train every operator, every official that might be in contact with any of these issues, so that this kind of principle is not just adopted as a common practice but also carried into the enforcement of the law once it gets onto the books. Among other things, I think capacities and resources, and collaboration, are key for those things.

KHUSHBU SHAH: Alissa, as our industry expert, I’d like to ask you that same question.

ALISSA STARZAK: You know, I think one of the interesting things about the commercial spyware example is that there is a government aspect of restricting other people from doing certain things, and then there is one that is a restriction on governments themselves. And I think that's what the executive order is trying to tackle. And the restricting-others piece, building agreement between governments that this is the appropriate thing to do, is clearly the objective here, right?

So, no, it's not that every government does this. I think there's a reality of surveillance, foreign or domestic, depending on what it looks like. But we should think about building rulesets for when it's not OK, because there can be agreement, if we work together, on what that ruleset looks like. Again, we have to strive for a better set of rules across the board on when we use certain technologies. And clearly, from what we've heard, the executive order is the first step in that process. Let's build something bigger than ourselves. Let's build something that we can work across governments for. And I think that's a really important first step.

ADEBOYE ADEGOKE: OK. Yeah, so I think the executive order is a good thing. Because I was thinking to myself, looking back many years ago in our work, when we started to engage our government on the issue of surveillance and its human-rights implications and all of that, I recall very vividly a government minister at the time saying: even the US government is doing it. Why are you telling us not to do it? So I think it's very important.

Leadership is very key. The founding members of the FOC: if you look at the FOC, the principles and all of that, those texts are beautiful. Those texts are great. But then there has to be a demonstration of the application of those texts, even by the governments leading the FOC, so that it makes the work of people like us easier, to say these are the best examples around, and you don't get the kind of feedback you got many years ago, like: oh, even the US government is doing it. So I think the executive order is a very good place to start from, to say, OK, this is what the US government is doing right now and this is how it wants to define its engagement with spyware.

But, of course, as he said, it has to be expanded beyond just concerns around spyware. It has to be expanded to the different ways in which advanced technology [is] applied in government. I come from a country that has had to deal with the issue of terrorism very significantly in the past ten years or thereabouts, and so every justification you need for surveillance tech is just on the table. So whenever you want to have the human-rights conversation, somebody's telling you: you want terrorists to kill all of us? You know? So it's very important to have some sort of guiding principle.

Yeah, we understand [the] importance of surveillance to security challenges. We understand how it can be deployed for good uses. But we also understand that there are risks to human-rights defenders, to journalists, you know, to people who hold [governments] accountable. And those have to be factored into how these technologies are deployed.

And in terms of the peculiar issues that we have to face: basically, you are dealing with issues around oversight. You are dealing with issues around transparency. You are dealing with issues around [a] lack of privacy frameworks, et cetera. So you see African governments acquiring similar technologies. I don't want to say in the guise, because there are actually real problems where those technologies might be justified. But then, because of the lack of these principles, because of these issues around transparency, oversight, legal oversight, and human-rights considerations, it becomes problematic. Because it's true that these tools are used against human-rights defenders. It's true that they are used against opposition political parties. It's true that they are used against activists and dissidents in society.

So it's very important to say that we look at the principles that have been developed by the FOC, but we want to see FOC governments demonstrate leadership in how they apply those principles in reality. It makes our work easier if that happens, to use that as an example to engage our governments on how it is done. And I think these examples help a lot. It makes the work very easy—I mean, much easier; not very easy.

KHUSHBU SHAH: Well, you mentioned a good example: the US. You reminded me of the biometric data that countries in Central and North America share as they monitor refugees, asylum seekers, and migrants. Even the US partakes. So what can democracies do to address the issue when they're sometimes the ones leveraging these same tools? Obviously, it's not the same as commercial spyware, but what are the boundaries of surveillance and of appropriate behavior by governments?

J.C., can I throw that question to you?

JUAN CARLOS LARA: Happy to. And we saw a statement by several civil-society organizations on the use of biometric data with [regard] to migrants. And I think it’s very important that we address that as a problem.

I really appreciated that Boye mentioned countries leading by example, because that's something we often expect from countries that commit themselves to high-level principles and sign on to human-rights instruments, that sign declarations by the Human Rights Council and the General Assembly of the [United Nations] or by regional forums, including to the point of signing on to FOC principles.

I think it's very problematic that things like biometric data are being collected from people who are in situations of vulnerability, as is the case for many migrants and many people fleeing situations of extreme poverty and violence. And I think it's very problematic that this also leads to [the] exchange of information between governments without proper legal safeguards to prevent that data from falling into the wrong hands, or even to prevent that data from being collected from people who have not consented to it or without legal authorization.

I think it's very problematic that countries are allowing themselves to do that under the idea that this is an emergency situation, without proper care for the human rights of the people who are suffering from that emergency, and that situations of migration are being treated as something that must be stopped or contained or controlled in some way, rather than addressing the underlying issues, or trying to promote ways of addressing the problems that come with them without violating human rights or infringing upon their own commitments to human dignity, to privacy, and to the freedom of movement of people.

I think it's partly about observing legal frameworks and refraining from collecting data that they are not allowed to collect, but also about obeying their own human-rights commitments. And that often means refraining from taking certain actions. In that regard, I think the discussions there might be on any kind of emergency still need to take a few steps back and look at what countries are supposed to do and what obligations they are supposed to abide [by] because of their previous commitments.

KHUSHBU SHAH: So, thinking about what you've just said, I'm going to take a step back. Alissa, I'm going to ask you kind of a difficult question. We've been talking about specific examples of human rights and what it means to have online rights in the digital world. So what does it mean in 2023? As we're talking about all these issues around the world, what does it mean to have freedom online and rights in the digital world?

ALISSA STARZAK: Oh, easy question. It’s really easy. Don’t worry; we’ve got that. Freedom Online’s got it; you’ve just got to come to their meetings.

No, I think—I think it’s a really hard question, right? I think that we have—you know, we’ve built something that is big. We’ve built something where we have sort of expectations about access to information, about the free flow of information across borders. And I think that, you know, what we’re looking at now is finding ways to maintain it in a world where we see the problems that sometimes come with it.

So when I look at what it means to have rights online, we want to have that thing we aspire to, which I think Deputy Secretary Sherman mentioned: the idea that the internet builds prosperity, that access to the free flow of information is a good thing, good for the economy and good for people. But then we have to figure out how we build the set of controls that go along with it and that protect people, and I think that's where the rule of law does come into play.

So, thinking about how we build standards that respect human rights: when we're collecting all of this information about what's happening online, maybe we shouldn't be collecting all of that information. Maybe we should be thinking of other ways of addressing the concerns. Maybe we should be building [a] framework that countries other than us can use, so that people at least don't point to the things that a country does and say, well, if they can do this, I can do this, and use it for very different purposes.

And I think that's the kind of thing we want to move toward, but the problem is that it doesn't really answer the underlying question, right? So what are the rights online? We want as many rights as possible online while protecting security and safety, which are, you know, also individual rights. And it's always a balance.

KHUSHBU SHAH: It seems like what you’re touching on—J.C., would you like to—

JUAN CARLOS LARA: No. Believe me.

KHUSHBU SHAH: Well, it seems like what you're talking about, and we've talked around this, is that there's a sense of impunity, right, in the virtual world, and that has led to what we've talked about for the last forty minutes: misinformation and disinformation. And if you think about what we've all been talking about for the last few weeks, which is AI—and I know there have been some moments of levity. I was telling Alissa about the image of the pope wearing a white puffer jacket that's been going around the internet, and I think someone pointed out that it was fake, that it was AI-generated. So that's one example. Maybe it's kind of a fun example, but it's also a little bit alarming.

And I think about the conversation we're having, and what I really want to ask all of you is: how might these tools, the issue of AI, further help or hurt [human rights] activists and democracies as we go into uncharted territory, as we see its impact in real time, as the conversation around it evolves, and as it's utilized by journalists, activists, politicians, and academics? And what should the FOC do—I know I'm asking you again—what can the FOC do? What should we aim for to set the online world on the right path for this uncharted territory? I don't know who wants to start and attempt it.

ADEBOYE ADEGOKE: OK, I’ll start. Yeah.

So I think it's great that the FOC has different task [forces] working on different thematic issues, and I know there is a task force on the issue of artificial intelligence and human rights. So for me that's a starting point: providing core leadership on how emerging technology generally impacts… human rights. I think that's the starting point in terms of what we need to do, because, like the deputy secretary said, technology's moving at such a pace that we can barely catch up with it. So we cannot afford to wait one minute, one second before we start to work on this issue and begin to investigate the human-rights implications of all of these issues. So it's great that the FOC's doing that work.

I would just say, and I think this [speaks] generally to the capacities of the FOC, that the FOC needs to be further capacitated so that this work can be brought to bear on real-life issues, in regional and national engagement, so that some of the hard work that has been put into those processes can really be reflected in national and regional processes.

ALISSA STARZAK: Yeah. So I definitely agree with that.

I think on all of these issues we have a reality of trying to figure out what governments do and then what private companies do, or what happens in industry, and sometimes those are in two different lanes. But in some ways, figuring out what governments are allowed to do, thinking about the negative potential uses of AI, may be a good start for thinking about what shouldn't happen generally. Because if you can start with a set of norms about what acceptable behavior looks like and where you're trying to go, you're at least moving in the direction of the world that you think you want together, right?

So understanding that you shouldn't be generating it for the purpose of misinformation, or for a variety of other things, at least gets you started. It's going to be a long road, a long, complicated road. But I think there are some things that can be done there in the FOC context.

JUAN CARLOS LARA: Yes. And I have to agree with both of you. Specifically because the idea that we have a Freedom Online Coalition to set standards or principles, and a task force that can devote some resources, time, and discussion to that, can also help identify which part of this is the promise and which part is the peril: how governments are going to react in a way that promotes prosperity, interactivity, commerce, and the exercise of human rights, the rights of individuals and groups; and which sides of it become problematic, such as the use of AI tools for detecting certain speech for censorship, for identifying people in the public sphere because they're out on the streets, or for collecting and processing people's data without consent.

I think that, because that type of expertise and that type of high-level political debate can be held at the FOC, it can promote the type of norms we need in order to understand what the role of governments is in steering this somewhere, or whether they should refrain from certain actions that, with the good intention of preventing the spread of AI-generated misinformation or disinformation, may end up stopping these important tools from being used creatively or in constructive ways, or in ways that can allow more people to be active participants in the digital economy.

KHUSHBU SHAH: Thank you. Well, I want to thank all three of you for this robust conversation around the FOC and the work it's engaging in. I want to thank Deputy Secretary Sherman and our hosts here at the Atlantic Council for this excellent conversation. And if you're interested in learning more about the FOC, there's a great primer on the DFRLab website. I recommend you check it out. I read it. It's excellent. It's at the bottom of the DFRLab's registration page for this event.