Afternoon Cyber Tea with Ann Johnson
Ep 91 | 2.20.24

Insights from GitHub's Chief Security Officer

Transcript

Ann Johnson: Welcome to "Afternoon Cyber Tea," where we explore the intersection of innovation and cybersecurity. I'm your host, Ann Johnson. From the frontlines of digital defense to the groundbreaking advancements shaping our digital future, we will bring you the latest insights, expert interviews, and captivating stories to stay one step ahead. [ Music ] Today I am joined by Mike Hanley, who is the Chief Security Officer and the Senior Vice President of Engineering at GitHub. Prior to GitHub, Mike was the Vice President of Security at Duo Security, where he built and led security research, development, and operations functions. After Duo's acquisition by Cisco, Mike led the transformation of Cisco's cloud security framework and later served as the Chief Information Security Officer for Cisco. Mike also spent several years at CERT/CC as a senior member of the technical staff and a security researcher focused on applied R&D programs for the US Department of Defense and the intelligence community. Welcome to "Afternoon Cyber Tea," Mike.

Mike Hanley: Thank you, Ann, it's great to be here with you.

Ann Johnson: Mike, you have had this really fascinating career in leadership roles supporting the US defense and intelligence communities, at Duo Security, at Cisco, and now at GitHub, so time in technology as well. Our conversation today is going to lean toward secure coding, the software supply chain, and what artificial intelligence is going to do to all of those things every security leader needs to be thinking about. But before we dig into all of that, can we go way back? Tell me, how did you first get interested in cybersecurity? What has kept you in the industry all this time? And tell me a little about your career.

Mike Hanley: If you go all the way back in the vault, I would say it really started with tinkering at home, which is probably how a lot of people got their start in security. It was an old Zenith Heathkit that my dad had, and my brothers and I had a chance to just sort of experience the power of home computing at that point. And then, you know, later on, the 386 that we had, and then the Gateway 2000 that came in the cow-spotted box, if you remember those. Having the chance to have access to computers at home, and having a patient set of parents and siblings who were okay when I was taking apart and breaking things on a regular basis, was really formative in the early days. But then from a professional standpoint, I'd say it really started to take off for me when I was in college, because for my undergraduate co-op that I did while I was at Michigan State, I worked for Electronic Data Systems, which is, of course, now part of HP. And in that role, I had the opportunity to be doing tech support for some of their very large enterprise customers. Starting my career in support was such a great place to be, because there's a main theme in customer support: you're generally working with people who are not happy because something is not going the way it's supposed to be going, and you are in a situation where you're trying to resolve problems or incidents and get people to a state where they're happy and they can get back to work. And I think you learn a lot of empathy in a role like that, where you're really just trying to figure out how you can take care of people and help them get their jobs done. And that has actually applied really well throughout my career in security. I think it's very much a theme of my security career over the course of the last 20 or so years. After I got out of EDS and finished my undergrad, I went off to Carnegie Mellon and did something called the Scholarship for Service program, which is a DHS- and NSF-sponsored program where you get paid to go to school and then you pay the government back with your time afterwards, and that was when I got my start at the CERT Coordination Center, which was an amazing time period in my career. But after that, my wife and I always wanted to get back to Michigan at some point, and we did that when I joined what was at the time a very tiny startup called Duo Security that became not tiny, and we had a great acquisition story; it became part of Cisco, and I had an opportunity to spend a few years there before coming to GitHub over the course of the last three years. And, you know, it's been great to see a little bit of the world in federal, in a small startup that became a happy security M&A story in the market, in big enterprise, and now at a big platform that's at the intersection of developing software and the home for our developers at GitHub. But I think what keeps me in it, and what's been consistent across all of those, is that security is just one puzzle after another, and I think it's always intellectually challenging, it's interesting. The people who are in the space are always interesting and fun to work with, people that you can learn from and be inspired by. And that's what keeps me coming back to work every day: the opportunity to have an impact, to learn something from interesting people, to help people get their next leg up in their security career, and just ultimately, bring more smart people into the role, because as you know, there's a massive shortage of talent in the industry.
So, the more we can do to bring smart people and attract them to our field, and give them an opportunity to make a difference, the better.

Ann Johnson: I think that's great. And by the way, congratulations on all your success too. Duo was truly one of the most successful acquisitions. You know, we do a lot of them in security, but it stands out for me. I knew a lot of the folks at Duo as well. So, a great company, great technology, and the work that they've done at Cisco is amazing. And I know you were really instrumental in all of that.

Mike Hanley: Thank you. It was definitely a special time in my career.

Ann Johnson: So, at GitHub, you're in this really unique role. Greg and I were talking about it. Greg is the producer of the podcast, for those of you who don't know who Greg is. But we were talking about this as we were thinking about you as a guest, because you're both the executive leading security and the executive leading engineering. And in most companies, those responsibilities are split. But as we shift left, it makes a lot of sense, right? Bringing together security and engineering is important today, and it's becoming even more important for those two functions to be in lockstep. Can you talk a little bit about your role and your perspective on the intersection of security and engineering? And do you think we're going to see more of that in leadership roles in the coming years?

Mike Hanley: Gosh, I hope so, Ann. I hope this becomes a trend and takes off. And I'll talk a little bit about why that is. First, it's worth noting when I came to GitHub, it was originally just to be the Chief Security Officer at the company. So, I took on the security program. We had the opportunity to take some amazing people, some amazing capabilities and technologies, and really just invest heavily in expanding that to support what we see as the opportunity for GitHub to have a massively positive impact on the broader ecosystem, right? I think our mission's a little unique in that it's not just keeping GitHub secure and making sure that we build secure products; there's really a third pillar to that, which is having an immensely positive impact on the security of developers, particularly in open source but also commercial developers, by making it easy for them to get to good security outcomes. So, we have some great functions there, like the GitHub Security Lab, for example, that are just doing amazing work out in open source. But the role expanded for me a little over a year and a half ago when the opportunity came up to lead all of GitHub engineering as well and bring those two teams together. And I think it's been very consistent with a thesis that we have, which is that security really does start with the developer. We hear things like build security in, don't bolt it on, or we hear things about starting with security. And really what we're saying there is we want to make it easy for developers who are building the technology that's part of our daily lives to be secure at the furthest point left in the lifecycle. Oftentimes, when you think about a typical security team's engagement, they might get brought in for a review, or they might throw some requirements on and do a pen test at the end of things. But when you have security involved not just at the beginning but really throughout the entire process of building and developing a product, and beyond that, you have security engineers who are building technologies to make it easy for the software engineers to have great security outcomes even when the security team isn't there, this is incredibly powerful in terms of the returns it pays down the road in the security of your platform, the types of experiences that the people who are using your platform have, etc. So, I think actually bringing the two roles together for us was only natural because, A, security is core to everything that we do. But, B, we want to make sure that it's easy for our engineers to be creative and to take well-informed and smart risks, but with the right guardrails and the right support systems in place. And I mentioned a moment ago, and, of course, you're well aware of this, I believe we covered this on the podcast before, but there's a talent shortage that gets reported several times a year in basically every major industry report that we see across the security space. And the reality is, you can't and really don't want to have somebody shoulder-surfing every single one of your engineers. You want to empower them to do the great work that they came to your company to create. And I think having the security and engineering teams actually under one roof and under one leader has really, really helped us drive a culture where we think all the time about being always secure and always available and always accessible to our user base. So, it's really been a productive thing for us.
I hope that other companies will look at adopting trends like that. And I think one of the best ways to really drive security left is to actually just co-locate those functions.

Ann Johnson: I think that makes perfect sense. It's logical. And as I mentioned earlier, as we think about shifting left, there's really nothing more important right now in security, and you can argue this, right, I'm going to make a declarative statement, which I normally don't, and I know it's up for debate, but our ability to address vulnerabilities before they're written and at the time they're written is going to be one of the most important things we do to strengthen security, because the nation-state actors and the cybercriminals attack vulnerabilities; as much as they attack poor cyber hygiene, they also attack vulnerabilities. So, having these functions together and having these roles together is incredibly important.

Mike Hanley: That's exactly right. And I would go a step further and say if you think about where most of the security-critical work gets done, it's in engineering. And so, in reality, what you want is the best security work to be happening not hidden away in security, but in engineering, where it's visible through the experience that your customers have. And that's been a real shift for us. That's been very positive. And again, I'm hoping to see other organizations continue that trend.

Ann Johnson: Fantastic. Well, for the CISOs, the Chief Security Officers, the security leaders listening in, the engineering leaders who listen to the podcast, I think most will agree with everything you said. There's also this huge need to improve the software development lifecycle to ensure that software and code are secure from the very start. So, the leaders I talk to often get tripped up on the how. They ask me questions like, how do I maintain the productivity of my engineers while enabling them to build more secure software? Or how do I skill or upskill the devs who were not trained in security? I'm curious about what mental models you use when you think about these challenges, which strategies you've put in place, how you recommend security and engineering leaders think about it, and how you think AI is going to change all of that.

Mike Hanley: Yeah. I would start by saying, first off, I think as the security leader, when you have to wrestle with some of these hard questions, the most important thing to remember is that you're running security but you're really one of the company operators, first and foremost. And you're trying to figure out how do I employ the resources, the authority, the remit, the mandate that I have as the person who's responsible for security in such a way that it serves the business's objectives, which includes risk management and, like, not having a bad time from a security perspective. But it also includes shipping products. It includes closing the books in finance on time. It includes making sure that people can access HR systems when they need to. And that I would basically summarize as: you want to shift the thinking of the security team and function from being the department of no to the department of yes, and. Any time somebody comes to me and says, "Hey, we want to do this. Like we think this is important to -- " let's just say it's the finance team. I find that when you start the conversation with yes, and, and the and is followed by how can we do that safely, how can we do that in such a way that it protects the customer information, how do we make sure that's consistent with our security and compliance needs as a company, how do we do that while protecting our employees and our intellectual property, the conversation is very different from when you just say no. In fact, often when you say no, that's a conversation killer, not a conversation starter. And it gives you, I think, as a security leader an opportunity to learn more about what the business is trying to accomplish. And I think when you have that mindset and you're trusting your counterparts and your peers in other parts of the organization that they know what's best for finance, for marketing, for sales, and you can bring your expertise to bear on security to meet them in the middle and find the solution that works for everybody, those are the most constructive conversations that I find myself in. So, you mentioned one example of how do I make sure that I upskill developers who aren't trained in security. A great thing that we do inside GitHub is we just ask them. We say, how confident are you in your use of our security tooling, or what would you like to learn more about from a security standpoint? And, of course, we can marry that to the information that we have about things that we view as needs based on our understanding of the requirements that are placed on us. But the engineers will tell you, or the finance people will tell you, what they need or what they're trying to accomplish. And the real benefit of this is, of course, you're meeting them where they're at. But on top of this, it also makes the security team a more accessible entity, right? You don't want to be the person where, when you show up in a conference room, everybody scatters because they go, "Oh, no, the security people are here." You want people who actually go to security and ask for help and who tell you when they see something is wrong. And it's building that trust, building that transparency, not just saying no but saying, "Hey, I think there's a better way to do this, and here's why I think that or how that might help manage the risks of the company." It's an opportunity to just build that shared understanding.
And I think in a company where you have a shared understanding of what's important from a security perspective, and where people can work together to meet those goals, that is where a lot of real magic can happen. So, I think just having an open mind to learning from others in your team, assuming that they've got the best interest of the company in mind, and then working to find solutions to their needs that are consistent with the security needs of the company, that's the approach I would recommend taking, and it has served me well.

Ann Johnson: I think that's really, really pragmatic advice for folks. And hopefully, they listen and think about how they can implement that within their culture and within their environment. Can we talk a bit about the supply chain, software supply chain?

Mike Hanley: We can, absolutely.

Ann Johnson: It's a hot topic. The leaders I've talked to are having a really hard time understanding the breadth and depth of the risks from their software supply chain. I'm sure you're hearing the same, and I'm sure you're dealing with the same. How would you advise organizations that are on the journey to better understand the supply chain? What questions and prompts would you use to get your peers in security in the right mindset?

Mike Hanley: You know, first off, Ann, what a wild few years it's been for this topic, I would say. This was actually part of the reason I came to join GitHub, because I saw a huge opportunity to have an impact in this space, given that GitHub is the home for open source and for so many commercial companies that are doing software development. Supply chain has been a mainstream topic really for the last three years. If you look back at SolarWinds and then Log4j and some of the other incidents that we had, I think people have a better understanding of the ripple effect of one library that's been developed by a handful of people and that we depend on in everything from your self-driving car to your refrigerator to your cell phone. And I think this is good, because it probably merited more attention. But to the point of your question, something that people still struggle with is, well, great, what do I do about it? It's not like I can just not use somebody else's software or not use this popular library. And this is where, with supply chain, I often go back and say we've got to really just zero in on what some of the basic elements are that we think about in other disciplines of security. A challenge that we face in this space right now is there are a lot of solutions, but I think a lot of people don't actually understand what the problems are that they're trying to solve. And this isn't meant to critique any particular standard or solution, but you have to look at those as tools that are available to you in the toolbox, whether it's something like SBOM, or the TUF framework, or something else like sigstore that's gotten traction in the space. All of these things serve a particular purpose, but none of them are a quick fix to our supply chain challenges. So, for organizations that are just getting started with supply chain, I think it's taking an expansive view of all of the things that go into the products that we make and ship. Because it's not just the code that you think you're packaging up; it's the systems that you use to build that code, as we've seen in relatively public incidents. It's the communities that are building those open-source libraries that you depend on and the level of security that they have in their projects, or even within the communities that are contributing to those projects. It's the developer's laptop and whether that becomes a channel for attacking the product that you're building. So, with the supply chain, I think we often think of it as very narrow, and that gets us into the trap of thinking, okay, well, if I just have these two tools or these five standards, I'm good to go. But it's actually a much, much more expansive challenge. And this is frankly why we decided to require two-factor authentication across the board for everybody who contributes code on GitHub.com. It was precisely to help make a dent in the supply chain space because, as I mentioned a moment ago, pretty much everybody is using open-source software to some degree. Adversaries are well aware of this. We see them going after open-source projects that are popular, and we want to make sure we do everything that we can to help protect and sort of raise the bar for standards of security for those communities. Really to protect them, but also to protect the people who are dependent on those communities and the software that they're making. So, that's one thing that we're able to contribute to.
But, of course, there's just a variety of work that's going on out there. And I would definitely encourage people to really just sort of think through, in the broadest sense, all the things that go into their supply chain and try to break that down into the most urgent problems that they can start with. And maybe it's SBOM because they just want to have an inventory of what's there. Great. That's a good place to start. But really thinking through those most critical assets and most critical questions that they want to answer, and then figuring out what the right tools are in their belt to address those, is what I would recommend people go through. The big caution that I would offer here is not taking too narrow a view of the problem. You know, SBOM can help you with inventory, but again, you also have to think about the security of your build systems, which are just as important as your production systems for operating your service. You've got to make sure that you really contemplate all those areas, think through what the potential challenges are, what the most important assets are that you're trying to protect, and what questions you most urgently and pressingly need to answer about their security. Keeping that big view of the supply chain in mind is really, really important.
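As an illustration of the SBOM starting point Mike describes, here is a minimal sketch of pulling a dependency inventory for a single repository. It assumes GitHub's REST API exposes an SPDX SBOM export at the dependency-graph/sbom endpoint for repositories with the dependency graph enabled, and that a token is available in the GITHUB_TOKEN environment variable; the endpoint, the response fields, and the example repository are assumptions to verify against current GitHub documentation, not a definitive implementation.

```python
# Minimal sketch: pull an SPDX SBOM for one repository and print its packages.
# Assumes GET /repos/{owner}/{repo}/dependency-graph/sbom exists (verify against
# current GitHub docs) and that GITHUB_TOKEN holds a valid access token.
import os
import requests


def fetch_sbom(owner: str, repo: str) -> dict:
    """Fetch the SPDX SBOM GitHub generates from the repository's dependency graph."""
    url = f"https://api.github.com/repos/{owner}/{repo}/dependency-graph/sbom"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    }
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()


def list_packages(sbom: dict) -> None:
    """Print each package name and version found in the SPDX document."""
    for pkg in sbom.get("sbom", {}).get("packages", []):
        print(f"{pkg.get('name')} {pkg.get('versionInfo', '')}".strip())


if __name__ == "__main__":
    # "octocat/hello-world" is a hypothetical example repository.
    list_packages(fetch_sbom("octocat", "hello-world"))
```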

Ann Johnson: I agree. And you said something during that conversation about a topic that I'm incredibly passionate about, which is a great place to start because we're also going to talk about cyber hygiene a little bit. You talked about how you are now requiring multifactor authentication for people who are contributing to GitHub. I talk all the time about how you must require multifactor authentication for every single person that is accessing your organization, 100% of the time. And as we move into the topic of cyber hygiene, we need to understand SBOMs, we need to understand supply chain security, and obviously it's a big risk, as we've seen in a lot of recent events. But we also need to understand cyber hygiene: things like multifactor authentication, least privilege, patching, updating, having a program where you understand what can and can't be patched and updated, do we have an encryption strategy, all of those things that are essential in building out a security program. When you think about those things, and you think about policies, building a security culture, SBOMs, how do you suggest that your peers might prioritize?

Mike Hanley: Yeah, the place I always start is what I think of as the brilliance of the basics, and that really starts at the foundation. If you remember, back in the early twenty-teens, Forrester had this security hierarchy of needs pyramid. I'd love to go back to that because I think it still applies today. At the bottom of that, they have: do you actually have a strategy, right, do you understand what you're trying to do here? And then on top of that is: do you have the people, and are they empowered, are they resourced, are they trained, are you supporting them? And then you get into the basic controls and policies and procedures. And near the top of the pyramid are some of the more interesting and dynamic parts of security, like detection and response, threat intelligence, and things like that. And it's not to say that those things are not good, to be clear, they all are. All of that is good and virtuous, and it all accrues, certainly if you get all the way to the top of the pyramid, to a great security program. But the challenge that I repeatedly see in a lot of organizations is they do the stuff at the top of the pyramid before they do the stuff at the bottom of the pyramid. And unfortunately, this is just one of the challenging counterincentives that we have in our industry: we talk a lot about some of the flashing-lights stuff, but also, as you know, Ann, it's 2024, and we're still telling people to turn on two-factor authentication. We still have to have that conversation despite the fact that we know phishing and account takeover are always at the top of the Verizon Data Breach Report and at the top of a lot of the other lists that we see about leading causes of cyberattacks. And, by the way, you don't need to look too much further down the list to see things like patching and vulnerability management. So, none of these things are necessarily the most interesting problems in security, if you're just looking for what gets the most eyeballs on the story. But the reality is, getting, for example, 2FA turned on for tens and tens of millions of people is a very difficult challenge. Even getting that done across GitHub's platform took the better part of a year just to do the research to figure out how we were going to approach a problem like that. So, you know, I'd say people shouldn't discount those foundational controls, those basics, those must-haves that always increase the attack cost for a bad actor, and they should get that stuff not just right but really complete, sort of wall to wall across the board, if you will, before they move too far up the pyramid. An analogy that I sometimes use with people is: if you get the latest state-of-the-art infrared camera with motion tracking on your house so that you can do the facial recognition to identify exactly who broke into your house, but you didn't put a $10 lock on the door to keep them out when they got to the door, did you make a good spending decision? And I mean, maybe people have variations in their answers, but to me, of course, you should just put the lock on the door first, because most of the time that's a deterrent. It at least makes it harder for somebody to break in, in the first place. But unfortunately, this is just one of those challenges of how the incentives have tended to work in the security space.
And my advice is to cut through a lot of the noise, cut through a lot of the flashy stuff that you see in security. If you get the basics right, and you really get them wall to wall, you're going to meaningfully shape not just the security of the organization; you're also going to really shape the culture of how people think about prioritizing those fundamentals across the board.
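Tying Mike's point about getting the basics wall to wall back to the 2FA example, a simple first step is just enumerating who has not turned it on yet. Here is a minimal sketch, assuming GitHub's REST API supports the 2fa_disabled filter on the organization members endpoint and that GITHUB_TOKEN holds a token with permission to read the organization; the filter name and the example organization are assumptions to check against current GitHub documentation.

```python
# Minimal sketch: list organization members who have not enabled two-factor auth.
# Assumes GET /orgs/{org}/members?filter=2fa_disabled is available (verify against
# current GitHub docs) and that GITHUB_TOKEN holds a token with read access to the org.
import os
import requests


def members_without_2fa(org: str) -> list[str]:
    """Return logins of org members reported as having two-factor auth disabled."""
    url = f"https://api.github.com/orgs/{org}/members"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    }
    logins, page = [], 1
    while True:
        resp = requests.get(
            url,
            headers=headers,
            params={"filter": "2fa_disabled", "per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # empty page means we have paged through every member
            return logins
        logins.extend(member["login"] for member in batch)
        page += 1


if __name__ == "__main__":
    # "example-org" is a hypothetical organization name.
    for login in members_without_2fa("example-org"):
        print(login)
```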

Ann Johnson: I think that's right. And I talk to CISOs and security practitioners and I use the exact same analogy you used, which is you can put an alarm system and cameras on your house, but if you don't lock the front door, it really doesn't matter. It's the same as saying you can put all kinds of sophisticated monitoring controls and the latest widget from any vendor, including us, by the way, in place, but if you don't have good cyber hygiene and you've left the doors unlocked, none of that actually matters.

Mike Hanley: That's exactly right.

Ann Johnson: Let's talk about two things that relate back to the world you live in today, which is citizen development and artificial intelligence and their impact on the software supply chain. There's this rise in citizen development: people who, like me, haven't written a line of code since college can suddenly have this low-code or no-code experience and develop lightweight applications on some really powerful tools. I think it's fantastic. But is it reaching the point where anyone can do it, and what is the risk to organizations? What do you think the unique challenges are, and how do you think things like gen AI will play into that?

Mike Hanley: I think the idea that you are basically improving access for people who want to be developers is a net good thing. And we see that, I mean, even just by the growth of the number of people who are on GitHub's platform, or the number of repositories or projects that we see, or even the emergence of, to your point, some of these low-code, no-code frameworks, open-source projects, and commercial businesses that are built on top of them. This is generally good, right, because you're generating new economic activity, and you're giving access to being a developer to people who may be coming through from a different path, a non-traditional route to being a developer, or a new path to being a developer that we haven't seen before. And I think this is exciting. Now, the challenge here is, of course, you always have to balance the freedom to create the range of things that people can create while also making sure that you're putting the right security guardrails and controls behind that. So, for the idea that I can start from scratch with no development experience and get a low- or no-code application deployed, certainly you have to set some expectations, I think, on the platform provider in that case to do some of the security legwork for you. And I think we saw some signaling, at least in the public sector, that the current White House administration, the Biden administration, sees things similarly, in the sense that the national cybersecurity strategy basically said that if you're a platform provider, one of the big software companies, or you're in a position where you're a platform for other people who are trying to create, you have more of that burden on you, right, to help people get security right. And I think that's generally the right approach, right? So, for all these companies that are in this low-code, no-code space, I do believe they have a responsibility to help make sure that while they're making it easier for people to create who maybe haven't created before, they are providing an appropriate set of guardrails to prevent people from making unintentional mistakes, which would clearly be unintentional in those contexts, right? If you're not familiar with the security space, you're not going to know when you're unintentionally having one of those effects. So, I think that ends up being a really key piece: building it in such a way where it's hard for people to get a bad security outcome. Now, if you look across the security tool landscape, or even beyond security tools, we know that historically this is actually really, really hard. And the example that I give people is, I believe it was back in 1999, there was a Ph.D. student at Carnegie Mellon at the time by the name of Alma Whitten, and the paper she wrote was called "Why Johnny Can't Encrypt." If you remember that one? It was a usability study of PGP 5.0, for those of us who have been around long enough to have used that. And the finding in the study was basically that even when we make security tools, or tools that are designed to have security effects, oftentimes security practitioners don't really design them for usability. And for the person who doesn't know anything about security, if you build a security tool, you really have to think through how they are going to experience it. Are they going to be able to get the outcome that I want?
And do they understand the hinting or the features that I'm making available to them so that they can get to a good security outcome? And I think that's where thoughtful user research and design are essential on the part of companies or developers building open-source frameworks in that space, to help make sure that they understand how people are actually going to use their stuff, that it is as easy as possible for people to get the good security outcome, and that there's probably a decent amount of friction between them and a bad security outcome.

Ann Johnson: We talk a lot about the concept of digital empathy, and digital empathy for security means that you cannot expect a user to know everything. And if a user clicks on a single link and it brings down your environment, then you haven't built the right robust systems behind it, which is what you were talking about with regard to building those platforms in a way that people can't harm themselves or the company. I love talking about digital empathy. Maybe we'll revisit it on the podcast again at some point in the future and just talk about those things.

Mike Hanley: I look forward to that. That sounds great.

Ann Johnson: So, we can't close without having a meaningful conversation about GitHub Copilot, about security copilots, about artificial intelligence. More broadly, what do you think copilots are going to do, both for coding, right, again, reducing the friction and allowing more people to be productive and able to develop applications? But also, what do you think copilots will ultimately do for the security of programming?

Mike Hanley: There's never been a better time to be a developer, in my opinion. And there are a couple of reasons for that, and I'll base those on my own experience with building and operating and, frankly, using GitHub Copilot. It's already the most widely adopted AI developer tool ever. With more than a million paid users, it's helping people write code 55% faster. In some languages, it's writing up to 60% of the code that people are working on. I mean, this is really transformational stuff for the experience of a developer. And we talked in the last question about low-code and no-code as ways to improve access to technology. Well, GitHub Copilot is also a way to improve access for people who want to be developers. And what's cool is, you know, it can be intimidating if you're new to development to open up an editor and know where to start. But if Copilot is available in a chat interface, you can say, "Hey, Copilot, help me get started with writing a game of Tetris. And I'd like it to be in Python. And I want the following options." And you can just get back working code from the outset, with useful follow-up questions that are going to help you. What a great way to do on-the-job training and a great way to explore. So, for seasoned developers, it has a massive impact on their productivity, but for new developers, this is a whole new way to experience development and learn as you go. And I think it will help train people to get into development very fast, and it will probably be a very delightful experience for a lot of people who are getting into that. So, I'm really excited about that and just the general improvement in access to becoming a developer. But to tie that back to security for a second, the benefits are not just on the productivity front. I mean, certainly those are the ones that we talk about a lot, because some of the stats I just mentioned are big. And by the way, it's early days, so they're only going to get better. But on the security front, we talk about shifting left in security, right? Like this is one of the main taglines, and I feel like it has been for the last, you know, 10-plus years. Well, what is shifting left more than when you're writing code and you're getting more secure suggestions, or the vulnerability that you wrote is getting corrected in real time by your AI pair programmer, or you're asking questions about whether there are any security vulnerabilities in the code that you just wrote and getting feedback there from your assistant? That is a big deal, because shift left has traditionally, in most cases, meant you're getting security feedback sort of at CI/CD time, right, which is great because it's before you get to production. But let's be honest, by the time your CI/CD has run, depending on where you work and what your internal build experience looks like, you've probably had lunch and, like, moved on to something else by the time you get feedback on what you just spent the previous two hours on. And this I think is one of the real game-changers from a security standpoint because, not only are we catching bugs, we're preventing them from ever really being written in the first place, let alone worrying about incident response and containment.
And that's going to be, I think, really, really transformative, not just for securing software generally, but also from a customer expectation standpoint: you're going to want to make sure that you have your AI superhero who happens to be an expert at security right there with you as you're doing your job. So, I'm very, very excited about the impact that that is going to have on the broader ecosystem. And just as an anecdotal data point, I mean, we've been using GitHub Copilot inside GitHub for close to three years at this point, and we haven't needed to scale the security team as a function of the additional lines of code that are written by Copilot. On the contrary, I think we generally find that Copilot writes very high-quality code and we have very happy developers internally. So, for some of the myths out there, like, well, is AI going to write more bugs? I mean, I think the reality is the productivity benefits and security benefits are only going to get better with time. And if you think about it, we're really only a year and a half into this journey of having this available as a commercial offering. We should all be really excited and really bullish on where this is going to be one, three, and five years from now. I think this will probably be one of the most transformative things that we see in security in our lifetimes.
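For flavor, this is roughly the kind of starter scaffold a prompt like "help me get started with writing a game of Tetris in Python" might produce: a board, one piece shape, and simple collision and locking logic. It is a hand-written illustrative sketch, not actual Copilot output, and it leaves out rotation, scoring, and rendering.

```python
# Illustrative Tetris-style starting point: a board, one piece shape, and
# simple collision/locking logic. Hand-written for illustration only.
WIDTH, HEIGHT = 10, 20
L_PIECE = [(0, 0), (1, 0), (2, 0), (2, 1)]  # piece cells as (row, col) offsets


def empty_board() -> list[list[str]]:
    """Create an empty board where '.' marks a free cell."""
    return [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]


def fits(board, piece, row, col) -> bool:
    """True if the piece can occupy (row, col) without leaving the board or overlapping."""
    for dr, dc in piece:
        r, c = row + dr, col + dc
        if not (0 <= r < HEIGHT and 0 <= c < WIDTH) or board[r][c] != ".":
            return False
    return True


def lock(board, piece, row, col) -> None:
    """Write the piece into the board once it can no longer fall."""
    for dr, dc in piece:
        board[row + dr][col + dc] = "#"


def drop(board, piece, col) -> None:
    """Drop a piece straight down in the given column and lock it in place."""
    row = 0
    while fits(board, piece, row + 1, col):
        row += 1
    lock(board, piece, row, col)


if __name__ == "__main__":
    board = empty_board()
    drop(board, L_PIECE, 3)
    print("\n".join("".join(row) for row in board))
```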

Ann Johnson: I do too. And the best part about it, in my mind, is not only does it obviously provide that security in real time, it doesn't reduce the productivity of the developers or cause friction, so they're going to use it because they can meet those deadlines that they have.

Mike Hanley: Yeah, exactly. And that is the classic complaint, right? Because so often with the security experience, I have to go out of the flow of whatever I'm doing and go look at something else. And this is right there with you, in the flow, which is the best way for developers to interact with security experiences.

Ann Johnson: Absolutely. So, Mike, I say all the time that I'm a security optimist, because I know that for every event we see in the news, there are thousands that we have detected and stopped as a security ecosystem and community. Can you tell us a little bit about why you're a security optimist and what your perspective is on how we're going to continue to come together and defend the digital world?

Mike Hanley: Yeah, there have been a couple of key things for me in the last few years that have made me hopeful. One, obviously near and dear to my heart with my work at GitHub, is I really feel like there's more virality around open-source security than I've seen at any point in the last 20 or so years. The work where companies are coming together in places like the Open Source Security Foundation to meaningfully move the needle on open-source security is very, very encouraging. But what's even more encouraging is we're seeing real traction in the public sector as well. I mean, after the Log4j incident, the Biden administration hosted the White House Open Source Security Summit, and it was an opportunity to not just be present with some of the public sector officials but really dialogue around how we are going to work together to solve some of these problems. And I feel like this is a really significant change in how we're thinking about approaching things, because security is a team sport, and I've heard you say this on the podcast before, but I really believe that it's not enough for GitHub or Microsoft or Google or one of the other big tech players to do the one or two things that they're in a position to do well. It really takes all of us, and it really also takes the unique abilities and authorities that the public sector has, the bleeding-edge research that we get in academia, and the collective power of the communities that are building a lot of the things that we depend on. And I'm seeing more of that coming together in a way that to me is very, very encouraging. Now, on the flip side, like I mentioned a moment ago, it is 2024 and we still need to tell people to turn on two-factor authentication. But the good news is we are also still seeing answers to questions like that as well. We saw passkeys really start to get meaningful, broad adoption in 2023. And, you know, we might be on our way now to finally putting the password out to pasture, which would be great. So, I am encouraged. I feel like in the last few years, while there have been some big landmark incidents that everybody can point to and that were in the news, there has also been meaningful progress, and it feels like most of the time we're rowing the boat together in the right direction. It's not always perfect, but I generally feel like everybody is clear that security is a priority, everybody is clear that security is something that we need to invest in and improve on. And while we're behind today, it does feel like there's a real commitment that people want to be excellent at that. And I'm happy to be a small part of that at GitHub, but I'm also happy to see other people doing their part as well to try to move things forward together.

Ann Johnson: Excellent, Mike. Thank you so much for making time out of your busy schedule to join me today.

Mike Hanley: It's been my pleasure and thank you for having me.

Ann Johnson: And many thanks to our audience. Join us next time on "Afternoon Cyber Tea." So, I invited Mike Hanley because he's just a wealth of information. He's the Chief Security Officer and the Senior Vice President of Engineering at GitHub. He was at Duo, he was at Cisco, he has a government background. There's not a lot he has not done. He understands the industry in depth but he also understands a critical part of the industry which is the software supply chain and shift left. So, it's a wonderful episode with just a wealth of nuggets and information for the audience.