Caveat 6.8.23
Ep 175 | 6.8.23

GDPR turns 5.

Transcript

Larry Whiteside Jr.: I think GDPR is that sort of privacy panacea that is beginning to put the world on notice as it relates to hey, people are starting to really care about their data and what companies are doing with it.

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner and joining me is my co-host Ben Yelin from the University of Maryland's Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today, Ben talks about an appeals court decision that could have major ramifications for cybersecurity firms. I've got the story of a judge who is none too happy with ChatGPT in his courtroom. And later in the show, my conversation with Larry Whiteside Jr., CISO of RegScale, discussing GDPR on the occasion of its fifth anniversary. While the show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.

Alright, Ben, we've got some good stories to share this week. You want to start things off for us here?

Ben Yelin: Sure, so my story comes from the Cybersecurity 202 over at the Washington Post. The article by Tim Starks is entitled "This 'zombie case' could have big ramifications for cybersecurity firms." The reason it's a zombie case is that it's been going on forever. This lawsuit was initiated in 2017.

Dave Bittner: Wow.

Ben Yelin: And much to my dissatisfaction, there's so many procedural issues here that are so incredibly dry and boring. But those procedural matters may end up delaying a decision on the merits even longer, which is too bad because I think the cybersecurity issues present here are extremely significant.

Dave Bittner: Okay.

Ben Yelin: So the case is Enigma v. Malwarebytes. They are both cybersecurity firms.

Dave Bittner: Yeah.

Ben Yelin: And Enigma was, in 2017, through this lawsuit, accusing Malwarebytes of labeling Enigma software as malicious, quote, "threats," and, quote, "potentially unwanted programs." So, the cause of action here that Enigma is alleging is that this violates a federal statute against false advertising. In order for that statute to be violated, one of the elements of that statute is that the statement in question has to be of a factual nature.

Dave Bittner: Let me pause here for just one second just to clarify. Because Enigma is a software provider. It's worth mentioning that Malwarebytes is, they provide software that protects your computer from malware. So, if they're labeling Enigma's software as bad, that could stop it from running theoretically on someone's system who had Malwarebytes installed. Right?

Ben Yelin: Right.

Dave Bittner: Okay.

Ben Yelin: It certainly has major financial implications for Enigma, which is why they're suing here.

Dave Bittner: Right.

Ben Yelin: And not to get too into the weeds here, it seems like these two firms have been going at it for a period of years. There was some third party firm that was involved in this litigation. Like I said, the issues in this case are unfortunately rather endless. And kind of distract us from focusing on what I want to focus on here.

Dave Bittner: Okay.

Ben Yelin: But anyway, this law here is called the Lanham Act. And it, like I said, prohibits false commercial speech. In order for this to be a violation of the law, the speech has to be factual in nature. The First Amendment protects opinion. I think that's a well established legal principle. You could really express an opinion about anything. This is America. If I wanted to say that the CyberWire or the Caveat podcast sucks, that's well within my right. It wouldn't expose me to any legal liability.

Dave Bittner: I can say this McDonald's hamburger is the worst hamburger I've ever had and McDonald's would just have to take that.

Ben Yelin: They just have to take that, yeah. Even if it costs them money, given your vast influence over potential McDonald's customers, they just have to accept that. But when it comes to facts, our government has decided that we have a public policy interest in making sure that there aren't falsehoods in commercial advertising. So, this suit originally came up in a district court located in Northern California. Many of these cases come through the Federal District Court in Northern California because that's where Silicon Valley is located. And the District Court said that this was not actionable under this false advertising clause. These were merely opinions. There wasn't any factual basis. The content, the alleged false speech, was unfalsifiable. And it went in front of a three-judge panel on appeal at the Ninth Circuit Court of Appeals. And in a 2-1 majority of this three-judge panel, the court says that this is a factual matter. These statements are falsifiable and therefore the litigation can continue. The judge who wrote the majority opinion in this case is basically saying we should cede our claim to expertise over to cybersecurity firms. They are very well-versed in cybersecurity and the subject matter. And if they determine that something is a potential risk, that it's malware, it shouldn't be the court's business to question that as a factual statement. The court should stay out of that matter. And at least for the purposes of determining whether the litigation is allowed to continue, that satisfies the standard. You're supposed to sort of assume the facts and evidence at that initial preliminary stage in the light most favorable to the plaintiff.

Dave Bittner: Okay.

Ben Yelin: And that seems to be what's happening here.

Dave Bittner: Okay.

Ben Yelin: The defense, which was written by -- or I'm sorry, the dissent, which was written by a judge named Patrick Bumatay, says that the majority opinion sends a chilling message to cybersecurity companies that civil liability may now attach if a court later disagrees with your classification of a program as malware, but that we have neither the authority nor the competence to arrogate to ourselves regulatory oversight over cybersecurity. This is what's interesting about this case and why it's particularly relevant. If the reasoning of the majority here is to hold, then every time one cybersecurity firm tries to identify a threat from any type of software, any type of network, any type of competitor in the field, they could be subjecting themselves to future litigation under the Lanham Act. And that could be a major disincentive for identifying something as malware or as a threat or as a potentially unwanted program. So it would have a major chilling effect on cybersecurity firms who are merely trying to inform their customers that such and such application or such and such service contains these malicious threats.

Dave Bittner: Okay.

Ben Yelin: I think that's certainly a valid policy concern and it's why I'm much more sympathetic to the dissent here. I also think that when you get into the nitty gritty of the law here, I really struggle with whether these charges that are laid out, that Enigma software is malicious, that it contains threats, and that it contains potentially unwanted programs, whether those three things are actually falsifiable. There are instances where something could or could not be labeled malware depending on how it was defined, and the same holds true for potentially unwanted programs, which have sort of a nebulous definition. That gets into unauthorized access, which oftentimes could have subjective evaluations to it. So for those two reasons, the dissent is certainly more compelling to me here. And I think the majority's decision is just frankly wrong. And if it were allowed to stand, it could have really negative implications on cybersecurity writ large, because it would disincentivize companies from identifying threats, which is the kind of work that ends up helping our national cybersecurity posture.
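
To make that falsifiability point concrete, here is a minimal sketch in Python. It is purely illustrative, with invented heuristics rather than any vendor's actual detection logic, but it shows how the same program can be a "potentially unwanted program" under one reasonable definition and clean under another, which is exactly why these labels are hard to treat as verifiable statements of fact.

from dataclasses import dataclass

@dataclass
class Program:
    name: str
    shows_ads: bool          # injects or displays advertising
    bundled_install: bool    # arrives alongside other software
    phones_home: bool        # sends telemetry back to the vendor
    user_consented: bool     # behavior disclosed in the install flow

def is_pup_strict(p: Program) -> bool:
    # Strict definition: any ad display or bundling is "potentially unwanted."
    return p.shows_ads or p.bundled_install

def is_pup_lenient(p: Program) -> bool:
    # Lenient definition: only undisclosed behavior counts.
    return (p.shows_ads or p.phones_home) and not p.user_consented

sample = Program("ExampleToolbar", shows_ads=True, bundled_install=True,
                 phones_home=True, user_consented=True)

print(is_pup_strict(sample))   # True  -- a PUP under one vendor's criteria
print(is_pup_lenient(sample))  # False -- clean under another's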

Dave Bittner: So what do we know about the original lawsuit here? What was it that these two groups had their tantrum about with each other?

Ben Yelin: Sure, so what we know about Enigma is it is a Florida company. It markets and sells computer security software nationwide. Its products, according to the complaint here, detect and remove malicious software, malware, such as viruses, spyware, adware, ransomware, and trojans; enhance users' internet privacy; and offer users the choice to block potentially unwanted programs and/or eliminate security threats and risks from problematic software programs.

Dave Bittner: Okay.

Ben Yelin: Malwarebytes is a competitor. It's a Delaware corporation, as are many corporations, although it's headquartered in California. This company, which was founded in 2008, has a flagship anti-malware product, which is aimed at detecting and removing malware, PUPs, and other potentially threatening programs on users' computers.

Dave Bittner: Okay.

Ben Yelin: So I think the allegation here is that the Enigma products were not being sold in good faith. In other words, they were presenting more risks to the end user than Enigma was letting on. And I think Malwarebytes was putting out these allegations publicly, which is why it was a violation of the Lanham Act against false advertising.

Dave Bittner: So Malwarebytes was flagging Enigma's software as being a potentially unwanted program, as you mentioned, a PUP.

Ben Yelin: A PUP, yep.

Dave Bittner: And Enigma took issue with that. So, how much is all of this clouded by the fact that these two companies are competitors?

Ben Yelin: I think the judges both rightfully dismissed that as a major factor. At least in this portion of the case. Previous litigation on false advertising doesn't really make a distinction when we're talking about competitors versus just other interested parties. I think there's kind of an assumption that competitors might use our legal system to go at one another. And we should still decide these cases on the merits, no matter the motivation. Certainly a relevant factor, but I think it didn't have a major impact in the majority opinion here.

Dave Bittner: Okay. So you mentioned at the outset, this is a zombie case. This has been going on for a long, long time. Where does it go from here?

Ben Yelin: Yeah, so this is where things get really messy. So all this decision said is as a matter of law, this lawsuit should be allowed to continue. So a couple of things could happen here. One is that this case could be reheard en banc by the entire Ninth Circuit Court of Appeals.

Dave Bittner: Okay.

Ben Yelin: This is only a slight exaggeration but the Ninth Circuit has like 100 judges. So, maybe more like 25, if I were to be exact.

Dave Bittner: Okay.

Ben Yelin: So, it's very uncommon that that large of a panel would rehear the case en banc. It could be appealed up to the Supreme Court. I don't think this is something that the Supreme Court necessarily would want to touch, especially since we haven't seen this issue raised in other circuits. So for right now, it'll go back to the District Court to adjudicate both whether this actually was false advertising under the Lanham Act, and the other procedural issues here. One of the procedural issues here is about where they can be sued, whether the venue was proper. I think because that's going to involve years of discovery and complex litigation, we're probably looking at two to three years before there's any resolution whatsoever on this. I think the takeaway here is that it's a warning to cybersecurity firms that if they don't pay attention to litigation like this, we could see a legal landscape in which companies could be held liable for what they have previously said about rival software companies, rival cybersecurity firms, or whomever, if they falsely label something as malware, even if they had originally made the charge in good faith. And I think this could be a dangerous use of the Lanham Act and false advertising claims. And it would have a major detrimental impact on the ability of cybersecurity firms to give honest assessments as to which products are safe and unsafe for their users.

Dave Bittner: So to what degree do you think this is a stretch in terms of using the Lanham Act?

Ben Yelin: I think it's a major stretch.

Dave Bittner: Okay.

Ben Yelin: If you look at the Lanham Act, it has a number of elements. So, in order to succeed under the Lanham Act, Enigma in this case has to allege that Malwarebytes made a false statement of fact in a commercial advertisement, that the statement deceived or had the tendency to deceive a substantial segment of the audience, that the deception was material and was likely to influence the purchasing decision, that the false statement entered interstate commerce, and that Enigma is likely to be injured. Several of those factors are satisfied. Enigma would clearly be injured here based on Malwarebytes' allegation. Certainly the statement entered interstate commerce; we're not talking about something that's localized to California. The deception here is material. I mean, if Malwarebytes is saying this is fundamentally unsafe, that's certainly material as to whether it would dissuade people from purchasing the product.

Dave Bittner: Yeah.

Ben Yelin: But that first factor is just very difficult to prove. You'd have to prove that Malwarebytes made a false statement of fact in a commercial advertisement. And it's just so hard for me to believe that an allegation that something presents a threat, or that something is malware, or that something is a PUP would be considered a falsifiable factual claim under the original intention of the Lanham Act. And I think there's certainly a slippery slope if that's the way the court sees these things. Would some firm simply identifying another piece of software as a risk, would that qualify as a false statement of fact in a commercial advertisement?

Dave Bittner: So, I don't mean to interrupt you, but what strikes me about this is that there are lots of bits of software that have multiple uses. Nuclear power can be used for nuclear bombs. So, you think about pen testers, penetration testers, the cybersecurity folks who are out there in good faith using tools to get into other people's systems. Well, in the hands of a bad actor, that could be a dangerous piece of software. So am I in legal peril if I acknowledge that truth?

Ben Yelin: According to this decision, yes, I think you would be in peril if that turned out to be a false claim. When it ended up being adjudicated.

Dave Bittner: I see.

Ben Yelin: Regardless of whether your claim was made in good faith. And I think that's the chilling effect that would bother me here. If people are afraid to give their basic opinions or reasoned judgments on the potential risks posed by a product, then that's going to hurt all of us. Because companies aren't going to make us aware of the risks that they see as professionals.

Dave Bittner: Can I, help me understand a basic legal concept here, because my, as always, my understanding is fuzzy. So let's just do a basic example here, okay? I say Ben Yelin is a bank robber and a scoundrel.

Ben Yelin: Fair.

Dave Bittner: Okay. Now what's the legal difference between that and me saying in my opinion, Ben Yelin is a bank robber and a scoundrel.

Ben Yelin: That's a great question. So here's the definition under the Lanham Act and we can try to parse this together.

Dave Bittner: Yeah.

Ben Yelin: A plaintiff must allege that the statement was literally false either on its face or by necessary implication, or that the statement was literally true but likely to mislead or confuse customers.

Dave Bittner: Okay.

Ben Yelin: Or consumers, rather. So, if you made that statement about me, it would have to be in the context of false advertising to be considered under the Lanham Act. So I would have to be selling some type of product.

Dave Bittner: So come buy my legal services because as we all know, in my opinion, lawyer Ben Yelin --

Ben Yelin: Is a bank robber. He robbed a bank.

Dave Bittner: You don't want to do business with him. He's a bank robber and a scoundrel.

Ben Yelin: I think if it was known that that was literally false, then I think that could be a valid cause of action under the Lanham Act.

Dave Bittner: Okay.

Ben Yelin: But to say something that was more of a matter of opinion, like Ben Yelin is a sleazeball, something that's not falsifiable, I think it gets to -- I think falsifiable is the key word here. You could theoretically disprove that I ever robbed a bank. You can't definitively disprove that I'm a sleazebag. You can try, it's been done. But it's not something that's easy to disprove. So I think that's the distinction here. Now, in other areas of the law, that's going to be different when we're talking about things like defamation. But specifically for the Lanham Act and false advertising. That's the standard. And that's really the question that comes in here. Is it a statement that is falsifiable?

Dave Bittner: So, in this case, then, is it up to Malwarebytes to make their case that what they were saying under certain circumstances could be perceived as being true?

Ben Yelin: Yes. So, it's Enigma that initiated the lawsuit. So they have the burden. When this gets back to the District Court.

Dave Bittner: Yeah.

Ben Yelin: Of proving with a preponderance of the evidence that they're correct on the law and the facts here.

Dave Bittner: Okay.

Ben Yelin: So technically it's on Enigma to disprove what Malwarebytes said and to show a 50% plus 1 chance that their interpretation is correct. What I think is particularly confusing about the majority opinion is that the judge in the majority says that the definition inherent in something like malware -- software used to monitor or gain access to another's computer system without authorization for the purpose of impairing or disabling the system -- quote, "lends itself to verification." And I just don't know that that's entirely true. I don't know if you could ever verify that. Especially the more subjective aspect of it, which is for the purpose of impairing or disabling a system.

Dave Bittner: Right. Right. And then there's the example of adware in here, which I think is fascinating.

Ben Yelin: Right.

Dave Bittner: Because adware, on the one hand, someone could say hey, this is great, I'm getting customized ads and I only see ads for the things I want to shop for. And someone else could say this adware is clogging up my computer, slowing it down, and making it unusable.

Ben Yelin: Right, which would be, yeah.

Dave Bittner: And both of those things could be true!

Ben Yelin: Both of those things could be true. Exactly. Exactly. I actually thought the adware example was particularly apt here. And was a good way of viewing this.

Dave Bittner: Yeah. So where are we going?

Ben Yelin: So if I had to predict, I think this is going to go back down to the District Court. The District Court is going to have a lengthy proceeding where they go through all the procedural issues here. And then presumably they'll make a decision on the merits. Like I said, all this decision said is that the lawsuit is allowed to continue.

Dave Bittner: Okay.

Ben Yelin: There's a valid basis of law, assuming all of the facts that the plaintiff alleges are true, as a matter of law, this case is allowed to continue. So I think it goes down to the District Court, they will adjudicate this on the merits. This plus the other, you know, millions of issues that have been invoked in this zombie case.

Dave Bittner: Right.

Ben Yelin: It will make it back up to the Ninth Circuit. Where this interpretation on this particular issue will be binding, but there might be interpretations they need to make on other relevant issues in the case, including all the other elements of the false advertising allegation. Because this falsifiable statement thing is only one element of the Lanham Act. You have to fulfill all of the elements. So there's going to be litigation at the District Court on all of those questions. It will make it back up to the Ninth Circuit. Maybe they will decide another issue. Or they'll say the District Court erred in doing X, Y, Z. We're sending this back to the District Court so that they can correct that error. And then maybe, I don't know, 2030, 2040, we might get final resolution of this case.

Dave Bittner: Fun, fun, fun.

Ben Yelin: After the literal zombie apocalypse.

Dave Bittner: Yeah. It just seems to me like oh, I guess I can't help wondering to what degree this is given life just by the bad blood between the two companies.

Ben Yelin: Yeah, I mean, they really hate each other now. And this is just -- they've now been battling in various courts for six years. With, you know, no sign that this is ever going to end. Yeah, there must be a lot of animosity in those courtrooms.

Dave Bittner: Alright. Well, this will be interesting to watch it play out. Maybe in Season 12 of Caveat, we'll have a resolution here.

Ben Yelin: I know. We always say we're going to come back to stories. And I really mean it. If, you know, if you're in your 70s and I'm in my 50s and finally we get a resolution on this case, we will be back and we will give it to you.

Dave Bittner: Alright, fair enough. Okay, well we will have a link to that story in the show notes. My story this week comes from the folks over at Ars Technica. This is written by Jon Brodkin. And this is about a federal judge saying "No AI in my courtroom unless a human verifies its accuracy." So this is US District Judge Brantley Starr, that's a great name.

Ben Yelin: Yeah, it sure is. Judge Brantley Starr.

Dave Bittner: I mean, that's like a central casting kind of name. It's a good name. He is a Trump nominee in the US District Court for the Northern District of Texas. And he says that there will be no AI in his courtroom. He said "AI platforms in their current states are prone to hallucinations and bias." So, what prompted this from Judge Starr was the case of a lawyer in New York --

Ben Yelin: I'm still laughing about this case, by the way. I'll let you describe it, but I've been laughing about it for a week.

Dave Bittner: So this is a lawyer named Steven Schwartz who admitted that he had used ChatGPT to help write court filings, and that ChatGPT cited six nonexistent cases that it had invented.

Ben Yelin: What gets me about this, we've talked about the idea of hallucinations in ChatGPT, but like the level of specificity for this completely false information. They're not just listing case names, but they're giving like full procedural histories and full Blue Book citations. Like they're creating just an entire universe of false information that seems real because they've dotted all the i's and crossed the t's. It's so bizarre and it's so funny.

Dave Bittner: Yeah. How could a lawyer do this, Ben? I mean, you think lawyers have assistants for these sorts of things, right? To fact check their filings and so on and so forth. Am I correct?

Ben Yelin: I think this lawyer probably has a lot on his plate. Has a million filings that he has to take care of. And he thought he could cut corners on this and didn't realize that ChatGPT has these hallucination problems. He apologized.

Dave Bittner: Yeah.

Ben Yelin: He says he, quote, "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity." He said that this is the first time this has happened. Sure. Although, it's still a new technology, it's certainly plausible this is the first time he tried to use it.

Dave Bittner: So he's awaiting punishment, or possible punishment, from Judge Kevin Castel of the US District Court for the Southern District of New York.

Ben Yelin: Once he's done laughing, he'll be able to issue sanctions.

Dave Bittner: When he gets up off the floor. Gosh, I mean imagine being Mr. Schwartz's client and having this happen.

Ben Yelin: Yeah, I mean, this is, for you Arrested Development fans out there, this is something that Barry Zuckercorn would have done.

Dave Bittner: Yeah.

Ben Yelin: It's just too perfect.

Dave Bittner: Interesting quote here from Judge Starr from Texas. Who says, "AI systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States. Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why." That's a powerful statement there.

Ben Yelin: I think I'm not going to try and get too touchy feely here, but I think this is beautiful. And I think this is something that other judges should adopt. I think he's getting at the distinction between AI and human beings. Which is getting increasingly blurred. And this can be a widely used definition. No allegiance to any client, the rule of law, or the laws and Constitution of the United States, unbound by any sense of duty, honor, or justice. Those are things that are inherent in human beings, inherent in legal ethics, inherent in the ethical and moral duty to advocate on behalf of your client. That's attendant in the practice of law. And that's something that AI could never, or certainly in its current form, just simply could not do.

Dave Bittner: Right.

Ben Yelin: And so I think this is something that's very important. I'm glad that this judge has issued this statement. I'm glad he's put forth this rule in his courtroom. I hope it's widely adopted; I think it could be a very useful model for judges across the country. Especially now that we know that ChatGPT in particular has these hallucination problems. It certainly clouds the reliability of using AI software to do legal research. Which I know I'm going to see among my students, and sometimes I would trust a student who cites a case. That will not happen going forward. I'm going to make sure that that's a case that actually exists. Even if the citation is perfect, even if the procedural history sounds plausible, we're all going to have to double check our work.

Dave Bittner: Right, and you can't rely on ChatGPT to double check the work, right? Like you as a professor can't say hey, ChatGPT, is this real? Because ChatGPT will say sure it is.

Ben Yelin: Although to get super meta, like maybe ChatGPT will spit out false information but maybe it can also identify that the information's false, I don't know. I don't trust it.
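
As a concrete illustration of what "double checking our work" could look like, here is a minimal sketch in Python that checks whether a cited case actually turns up in a public case-law database. It assumes CourtListener's REST search API; the endpoint, query parameter, and response field are written from memory and should be treated as assumptions to verify against the current documentation, not as a tested client.

import requests

def case_appears_to_exist(case_name: str) -> bool:
    # Query CourtListener's search endpoint for the case name.
    # Endpoint path, the "type" parameter ("o" for opinions), and the
    # "count" response field are all assumptions about the API.
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": case_name, "type": "o"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

for citation in ["Enigma Software Group v. Malwarebytes"]:
    status = "found" if case_appears_to_exist(citation) else "NOT FOUND"
    print(citation, "->", status)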

Dave Bittner: So let me ask you this. Obviously, this is a case where ChatGPT is well documented as going off the rails. And it hasn't just happened in these legal matters. We've seen all sorts of stories about it making up things that simply aren't the case.

Ben Yelin: Right, with extreme confidence.

Dave Bittner: Right, right. Yeah, they say it's --

Ben Yelin: Definitively.

Dave Bittner: As a service, right? So, however, I think there are also things that it is quite good at. It is quite good at summarizing things. In other words, if I give it a paragraph full of facts that I've checked as a human being and I say hey ChatGPT, this needs to be shorter, can you summarize this? It can do that quite well. In a case like that, where I'm not relying on ChatGPT to do my research, but I'm relying on it to basically be an editor, am I obligated to share that with the judge?

Ben Yelin: I think under this judge's instructions here, you are obligated to share that. I don't think that's a particular burden. I mean, if you can really cut down the information that you have to read to prepare for a lawsuit by crunching a 50 page document into a 500 word summary.

Dave Bittner: Yeah.

Ben Yelin: That's a useful service, and if you feed in true information, at least there's a decent chance that it's spitting out true information. You're not asking it open ended questions where it's going to use its imagination. So I think in that case, the use here would be permissible. You would just have to get the judge's permission. And you could be sanctioned if you used generative AI tools and did not ask for permission under these circumstances.
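
Here is a minimal sketch in Python of the "editor, not researcher" workflow Dave describes: a human supplies facts that have already been verified, and the model is only asked to compress them. It assumes the OpenAI Python SDK with an API key in the environment; the model name and the exact prompt wording are illustrative choices, not a recommendation.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Facts the filer has already checked against the record by hand.
verified_facts = """(a paragraph of human-verified facts goes here)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Summarize the user's text in under 100 words. "
                    "Do not add any facts, names, or citations."},
        {"role": "user", "content": verified_facts},
    ],
)
print(response.choices[0].message.content)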

Dave Bittner: Right. So, in your opinion, Judge Starr is on the right track here and has done the right thing. And this is a model that other courts should follow.

Ben Yelin: Absolutely. Absolutely. And it's not just me recognizing that; I've seen it quoted in articles from legal experts as very promising. I think the case in New York was the warning shot. That somebody's going to try and do this. We caught it this time. But what would happen if this was a criminal case and somebody was convicted based on precedent from cases that didn't exist?

Dave Bittner: Right.

Ben Yelin: The stakes are very high. So I think if we're going to be using this new technology in a legal system where oftentimes it's a matter of life and death, freedom or lack of freedom, that we should be sure that we are verifying the factual claims that we are presenting.

Dave Bittner: Yeah. Alright. Boy, that's an interesting one, isn't it? Alright, we will have a link to that in the show notes, of course. And we would love to hear from you if there's something you'd like us to discuss here on the show. You can email us. It's caveat@thecyberwire.com.

Ben, I recently had the pleasure of speaking with Larry Whiteside Jr. He is the Chief Information Security Officer of a company called RegScale. And we're discussing GDPR on the occasion of its fifth anniversary. Time flies, huh? Here's my conversation with Larry Whiteside Jr.

Larry Whiteside Jr.: If you think about five years ago, privacy was still this, you know, emerging thing, because privacy is a thing that consumers have to care about. This isn't about corporate threat actors attacking some corporate entity. It's really about consumers taking pride and really owning the fact that the data that companies gather about them, they care about. And they're concerned about. And they're concerned about what happens to it, how it's used, and the potential of a breach of someone getting it that shouldn't have it, right? And if you think about it from an American standpoint, and that's the lens I sort of took with GDPR, in America we don't really care that much about our personal data. We like to think we do, we like to get on social media and say we do, but when you look at the actions of the majority of American citizens, they do not do the things that show and demonstrate that we care about our personal data. So with that, when I thought about GDPR being enacted in the EU as it was, I wasn't sure what impact that was going to have, A, on the US, but also on, I'll say, the data and privacy scene as a whole globally.

Dave Bittner: And how has that played out in your estimation as you've watched it take effect over the past five years? Do you think it's been effective?

Larry Whiteside Jr.: Oh, it's absolutely been effective. And one of the things that is always a measure of effectiveness is this: it's copycats. When you do something, how many others begin to copy it or start to follow suit in a similar fashion? And if you look at GDPR, and what happened with PIPEDA being established in Canada, CCPA being established in California, and there are many other privacy laws in the works across many different states in the US, it was a force multiplier. Because people started to say hey, wait, what? We can have this? And state legislatures, right, in the US started saying, listen, in order to really take care of the citizens in my particular state, this is something that may have a positive impact to allow a state to hold companies accountable. Right? It's similar to when there were a number of different healthcare regulations that came out around HIPAA back in the day. Well, Massachusetts then decided to take that a little step further. And then Massachusetts sort of started a firestorm where many other states began to start doing similar things to hold companies more accountable. I think GDPR is that sort of privacy panacea that is beginning to put the world on notice as it relates to hey, people are starting to really care about their data and what companies are doing with it. And how they're utilizing it. Especially because, as we've gone through a number of things where it's been identified that there are these large data banks that are being gathered and put together that organizations are utilizing, and then selling that information to others, it's just started to make people way more aware of how much data and information about them is out there.

Dave Bittner: It's really an interesting notion, I think, that in some ways, GDPR showed the rest of the world what's possible. And that you could do this and it was able to be implemented. There haven't really been any GDPR trainwrecks that I can think of.

Larry Whiteside Jr.: No, there haven't. And they've been successful. I mean, when you look at and you go through the list of organizations that have had very hefty fines, these are not small organizations who don't have big privacy programs, big compliance programs, big security programs. These are large organizations that have spent potentially hundreds of millions of dollars on security and compliance and privacy. And are still falling prey to it. Getting hit with fines, right? So whether it's Meta or Alphabet or you name it, they are getting impacted by this. And it's actually driving change.

Dave Bittner: It's interesting to me that GDPR encompasses the European Union. And yet, here in the states, we haven't been able to really have much effective movement at the federal level. It's been all at the state level. What's your perception on that? Is that, you know, those darn Americans and how we do things, or?

Larry Whiteside Jr.: It is. I mean, that's sort of par for the course. If you look at how legislation happens in the States, doing it at the federal level is turning a tanker in a pond. Right? And so, there is so much bipartisan work that has to happen, right, to bring people to the table together. Whereas at the state level, you can move a little quicker, right? Because they don't tend to have as many partisan issues at the state level. A lot of times, there's a lot of similarity across the state. So, you can move and work towards things and get things up and over the hill or across the finish line, however you want to say it, to completion at the state level for your state's citizens. But at the federal level, there's just a lot more hurdles, the finish line is further away, and there's a lot more partisan rhetoric and things that we deal with in the States at the federal level than they do in the EU Parliament.

Dave Bittner: Do you suppose that we are on a trajectory where we will eventually see federal privacy legislation?

Larry Whiteside Jr.: I think we will. I think if you look at the number of executive orders that have come out over just the last, let's say, the last two presidencies around cyber specifically, we now know that the federal government is taking this very, very seriously. Right? There are a number of organizations at the federal level that have their eye on this. That are looking at the impact and trying to drive change internally. But to get to the point of actually having some sort of federal policy around privacy, it's still, it's going to take some time. I do anticipate that we would probably see an executive order prior to seeing some sort of federal regulation. But I do know that privacy is a hot topic at the federal level.

Dave Bittner: Yeah. As a CISO yourself, how does GDPR affect how you do your job on a daily basis?

Larry Whiteside Jr.: Yeah, so for me as a CISO, right, it's all about the data that we collect and what we do with it, right? And it's also how we protect it, right? Because it's interesting. When you think about GDPR, and the controls and things it's asking of you, the way organizations protect their data really hasn't changed a whole lot. We're still trying to protect it, we're still putting a lot of the controls around it. But what it does is it forces you to have more control, right? Now when I say control, that means you as an organization who collects data on individuals, previously, prior to GDPR being a thing, prior to privacy being a thing, you didn't worry about the granularity of how you could control access and remove data sets from your database, right? It was this big set of data, you know, you utilized tools to do analytics on it, to glean different things about how your business can be better. But you looked at data as this big data lake. GDPR forced organizations to focus on enabling granular-level control of data sets so that they could properly remove, right, strike out, get rid of, delete, whatever needed to happen, at a far greater level than anyone had ever really probably thought of prior to that. And that's what it forces us to do today. So as CISOs, we've got to ask our technology organizations hey, with this data that we're getting in, do we have the ability to do these things, right? Do we have the ability to get this granular? And to make those types of low level, structural changes to the data to remove, delete, restructure, or whatever that may be based on the individual use case. And that's the thing that as a CISO, I stay cognizant of, as we do more data gathering, right, based on the products that we have and based on the customers that we have. As we're doing more of that data gathering, it's ensuring that those underlying controls are always there regardless of how big the data gets.
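
For a concrete picture of the granular control Whiteside is describing, here is a minimal sketch in Python using SQLite. The table and column names are invented for illustration; the point is simply that when every record is keyed to a data subject, an erasure request can remove one person's data without disturbing the rest of the data lake.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    subject_id TEXT NOT NULL,   -- the data subject each record belongs to
    collected_at TEXT,
    payload TEXT
)""")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("user-123", "2023-06-01", "pageview"),
     ("user-456", "2023-06-01", "purchase")],
)

def erase_subject(subject_id: str) -> int:
    """Handle an erasure request for one person (GDPR right to erasure)."""
    cur = conn.execute("DELETE FROM events WHERE subject_id = ?", (subject_id,))
    conn.commit()
    return cur.rowcount  # rows removed, useful for an audit trail

print(erase_subject("user-123"))  # 1 -- only that subject's rows are gone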

Dave Bittner: You know, GDPR is evolving. They continue to add new things. There are bills that are making their way through the European Union. And I think everyone's imagination has been captured lately by artificial intelligence. And I'm curious, from your point of view, how do you have your eye on that, on how this focus on that technology has the potential to change things? Perhaps even be an inflection point.

Larry Whiteside Jr.: It does. And the AI battle, I'll call it, is really around this generative AI and its use cases. And how it's used. And what are the rights that the consumer who utilizes it has, versus the rights that the creator of the generative AI platform has. And where is that line drawn? Right? Because if you look at the current generative AI solutions that are out there that are largely being used, they pretty much tell you, what you put in here now belongs to us and we can use it however we see fit. And from a consumer standpoint, depending on what that data is, right? Does that conflict with things from other regulations? But if you as a consumer agree to it, you really don't have a leg to stand on as it relates to some other regulations, in saying oh no, I want my data back. But you agreed to that EULA when you utilized it. So there's --

Dave Bittner: The all-powerful EULA, right?

Larry Whiteside Jr.: Right, that nobody reads, right? You know, so.

Dave Bittner: Right.

Larry Whiteside Jr.: It's going to be interesting as we move forward, right. The EU, again, because they move a lot faster than we do in the US, they are looking at an AI act, right? That will define some things on good use of AI versus bad use of AI. And what that looks like. And to try and give both their citizens some understanding, but also give companies an understanding of, you know, as a governmental entity, here are our expectations of how you use AI. Because no one wants to ban AI. Everybody realizes there's value in AI being utilized. But they also realize that they want to put some boundaries around it of what is a good use of AI. So that companies don't, you know, overstep the rails, so to speak. So that they don't go outside the bounds of the line in utilizing AI where it begins to negatively impact citizens in some way.

Dave Bittner: You know, Larry, here we are marking five years of GDPR. I'm curious, as you look toward the horizon, and try to imagine five years from now, do you have any hopes for where we might find ourselves then?

Larry Whiteside Jr.: Yeah, you know, honestly, I am hoping, and for me, I think GDPR was sort of a panacea event for it. I'm hoping to see some form of global regulation around privacy and cyber. Right, I'm hoping to see something that is universally accepted across the globe on how we can both hold organizations accountable but also lay out bounds within which organizations can then utilize global resources to go after threat actors as well, right? There's this -- one of the big challenges that every organization has, and CISOs have, is when something happens, the component of going back and trying to figure out not just who did it but then trying to bring them to accountability. Because there are so many, we'll call them dark holes, across the globe that have not put anything in place to enable outside countries to sort of deal with any threat actors in their environment that happen to live inside their border, so to speak. And so I'm thinking that in five years, we should begin to see some movement, especially with the success of GDPR, around how we come together to create a more global policy framework around those types of things. Right? Because I'm not a huge fan of these little piecemeal components, right? If you look at GDPR and you look at PIPEDA and you look at CCPA and you look at -- they're very similar, but they have nuances. Right, well the more nuances that you have, the more difficult it is for organizations to meet all the nuances. And we've got this global mapping and all these different things. But it's still work. And so, I'm hoping that as we transition, right, with cyber now being a little over 30 years old as an industry, per se, I think it's time for us to start having some aspect of global regulation that begins to catch up with the times. And enable us to be a better industry overall.

[ Music ]

Dave Bittner: Ben, what do you think? Five years of GDPR.

Ben Yelin: Yeah, I mean it's really remarkable that it's been five years. I think if I were to take my overall impression, it has been a success.

Dave Bittner: Yeah.

Ben Yelin: There have been shortcomings, but I think what we've seen from cases from the European Court of Justice is that they're taking privacy seriously. That there are enforcement mechanisms that, however flawed, have teeth. And it's something that frankly I wish we could replicate here in the United States.

Dave Bittner: That was going to be my next question: when GDPR went into effect, did you think that five years out, the US would still have no federal privacy legislation?

Ben Yelin: I sort of thought that just based on the inertia that generally exists with our legislative body in the United States.

Dave Bittner: Yeah.

Ben Yelin: But yeah, I mean, it's still notable and surprising. And I think just to reiterate, it's hard for companies to comply with a patchwork of different federal laws plus 50 separate state statutes. It would be much easier for everybody involved if there was a single federal standard.

Dave Bittner: Yeah, absolutely. Alright, well, our thanks to Larry Whiteside Jr. for joining us. We do appreciate him taking the time.

That is our show. We want to thank all of you for listening. We'd love to know what you think of this podcast. You can write us an email at cyberwire@n2k.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at N2K.com. Our senior producer is Jennifer Eiben. This show is edited by Elliot Peltzman. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.