At Roanoke Blacksburg Technology Council’s recent event, Defense Against the Dark Arts: (from left) Raytheon Technologies vulnerability assessment analyst Ben Eldritch, RBTC program coordinator Alla Daniel and Virginia Tech’s director of future technology Thomas “Tweeks” Weeks stand in front of a screen that includes Virginia Tech’s lead penetration tester and security researcher, Caeland Garner. Courtesy of Thomas Weeks.

You can’t swing a digital cat without hitting a major cybersecurity breach somewhere. It feels like a world of black-hat hackers, with no white hats coming to the rescue.

The white hats are hard at work, too, though, sharing information amongst themselves to prevent breaches that can compromise personal data and finances, among many other prizes that hackers crave.

The Roanoke Blacksburg Technology Council hosted an IT security forum this month, its annual Defense Against the Dark Arts event, at Virginia Western Community College. Two of the presenters were Thomas “Tweeks” Weeks, director of future technology at Virginia Tech, and Caeland Garner, lead penetration tester and security researcher in the university’s IT department. Weeks and Garner chatted a few days afterward with Cardinal News about security in the corporate world, at the university and the personal level.

This Q&A has been edited for length and clarity.

Cardinal News: I just wanted to start with a general question that eats at me seemingly once a week, when a report emerges of a massive hack at an institution that holds millions of regular people’s critical identifying and financial information. And I just wonder: Why are large companies so bad at protecting their customers and clients?

Thomas Weeks: Well, a lot of companies don’t want to spend the money or time or resources on designing secure systems to start with. They oftentimes don’t put the needed financial resources behind protecting their corporate assets or their clients’ assets until it’s too late, and they’re in the news and the media, and their stock takes a hit.

Even for companies that do their due diligence, that isn’t quite enough anymore. We saw, for example, in the Target hack that they had secured their point-of-sale systems and kept their corporate credit card information safe. But the black hats, the hackers who got into the system, came in posing as a [fake] service crew, wearing air conditioning vests and saying, “We’re here to service your air conditioning system,” and went in the back and plugged into the infrastructure there.

So it’s important to have audits to confirm you’ve secured your systems, but you need to make sure that you’re having some third parties … that think outside the box come in and do what’s called penetration testing of your infrastructure.

Cardinal: You’re partnering with RBTC on Defense Against the Dark Arts, which I unfortunately could not attend. That was for IT specialists.

Weeks: I actually run the IT security forum for the RBTC. It started off as just a bunch of us guys wanting to talk about things we’re seeing in the wild, and it was very popular and has kind of ebbed and flowed. Last year we had only one presenter, but typically each year we have three to four presentations on attacks we’re seeing in the real world, how to replicate them … and then also how to defend against them.

And so everyone walks away with some how-to information on how to detect and mitigate and/or prevent said attacks.

Cardinal: You’re sharing your information about preventing encroachment with the people who come in. But I wonder what you’re learning, if anything, from the folks you’re presenting to. As I heard you say earlier, it just started out with guys getting together and talking about real-world experiences. Is there that kind of back and forth in these sessions?

Weeks: Yeah, it’s a group of experts talking. So this is not your average high-level, glossy CEO-type talk. This is a room full of IT experts. So it’s getting pretty down and dirty. Very technical presentations typically. Even during my talk I’m like, OK, here’s the X, Y and Z of what I saw on my system. Here’s what I did. What do you guys think? What are you guys doing? What are you guys seeing? And I often get some really, really good feedback from the audience.

Cardinal: Virginia Tech is obviously a pretty massive institution with arms dedicated to the defense industry and intelligence, among other things. I just assume that attempted attacks are relentless. Am I correct? And is that what you spend a large part of your time doing? Break that down for me a little bit.

Caeland Garner: Yeah, I mean, it is relentless. But my whole philosophy is that wherever there is a digital device plugged into the internet, it’s susceptible to attack. Especially because we’re a research institution, that definitely puts a target on our back.

I’m not actually looking for the attacks. That’s blue team side. I’m red team. And in red team, what we try to do is we try to be proactive. It’s called offensive security. Defensive would be blue team. 

We’re trying to be proactive in finding the holes before the attackers do: other countries, script kiddies [unskilled hackers using pre-existing, malicious tools], nation states, anyone trying to do malicious things on the internet toward us. I’m trying to find those holes and then find remediation efforts to mitigate these vulnerabilities before the attackers can get in.

But also in what we do, we always assume that if we found a critical or high vulnerability — something that allows sensitive data to come out or someone to gain a foothold in our internal infrastructure — the attackers have already found it. That kind of ties into your question of looking for these events, looking for these threats. That’s where the blue team comes into play, trying to find out: When did this threat happen?

A lot of times we’ll work together. In a purple team exercise, we’ll try to reverse-engineer the threat. So if the blue team has seen an incident, the red team will come in and try to figure out: OK, well, this is what I see on this device, and if I was an attacker, this is how I would go about it.

Cardinal: Tell me about a specific recent threat and how you handled it.

Weeks: We’ll often see that when things happen on the internet, it’s kind of like a storm blowing through. When you get some hot new malware or some hot new vulnerability, you’ll see scans across the entire internet start to surge.

On our network, our security office is always watching the incoming and outgoing traffic, and they’re looking for trends and they’re looking for anomalies. So you baseline your network. You know what the general input and output is. 

If you see a spike going to China at 3 a.m., then you flag that, and then you decide to either take automated action, intervene manually or just watch. So there’s a lot that you look at from a monitoring perspective. And then you have the actual red teams and blue teams, who are doing things actively and working in conjunction with our network scans and network monitoring.
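As a rough illustration of the baselining Weeks describes, a monitoring script can compare an hour’s outbound traffic against the historical norm for that hour and flag large deviations. This is a simplified sketch in Python, not Virginia Tech’s actual tooling; the byte counts and the three-standard-deviation threshold are made-up placeholders.

```python
from statistics import mean, stdev

# Hypothetical baseline: outbound bytes seen during the 3 a.m. hour
# over a week or two of normal operation.
baseline_bytes = [1.2e6, 0.9e6, 1.1e6, 1.0e6, 1.3e6, 0.8e6, 1.1e6]

def is_anomalous(observed, history, num_stdevs=3):
    """Flag traffic that sits far outside the historical norm."""
    return observed > mean(history) + num_stdevs * stdev(history)

# A sudden 500 MB burst at 3 a.m. stands out against a roughly 1 MB norm.
tonight = 500e6
if is_anomalous(tonight, baseline_bytes):
    print("ALERT: outbound traffic spike at 3 a.m. -- review the destination "
          "and decide whether to block automatically or investigate by hand.")
```

In practice this kind of logic lives inside dedicated network-monitoring systems rather than a standalone script, but the idea is the same: know your baseline, then watch for what doesn’t fit.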

One of the big things that black hats or nation states will scan for is open Microsoft ports. An open port is like an unlocked door or window in your house. If there’s a new remote desktop protocol exploit, you’ll see scanning for it surge. As soon as we see those kinds of surges, our security office will make a decision on whether to ride out the storm or to lock things down, depending on how bad it is.
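The “open door” scanning Weeks mentions comes down to attempting a connection to a port and seeing whether anything answers. Here is a minimal sketch of that idea, using a placeholder address from the reserved TEST-NET range; defenders run this kind of check only against machines they own or are authorized to test.

```python
import socket

# Ports attackers commonly probe; 3389 is Microsoft's Remote Desktop Protocol.
PORTS_TO_CHECK = [22, 80, 443, 3389]

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds (an 'open door')."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address; only scan hosts you own or are authorized to test.
host = "192.0.2.10"
for port in PORTS_TO_CHECK:
    state = "OPEN" if port_is_open(host, port) else "closed/filtered"
    print(f"{host}:{port} -> {state}")
```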

Cardinal: As developers make programs, is it really difficult to avoid missing something that can become a vulnerability?

Weeks: Absolutely. A lot of people take solace in running open-source software, but even open source is vulnerable. There have been serious, serious bugs in the biggest web server on the planet, Apache, for example, or in OpenSSL, which is used to encrypt traffic to these websites and things.

These vulnerabilities are human errors that have been left in place for over a decade with no one knowing about them. And we won’t know about them until we start seeing scans or exploits or systems behaving weirdly.

Luckily, the open-source community will often catch those types of things and submit bug reports. For example, a new vulnerability will be discovered and, before a hacker or black-hat exploit actually comes out, the open-source community will have already created a patch and put it out there for distribution. The important next step is people keeping their systems patched. If that happens, then people are much, much less likely to get bit. But there are zero-day exploits, which are exploits no one knows about, or that nation states may have tucked in their back pocket and never announce. Those are probably the most dangerous.
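As one small, concrete example of keeping systems patched, you can check which OpenSSL build your own Python runtime links against and compare it with whatever minimum your vendor’s security advisories currently call for. The minimum version in this sketch is a placeholder, not guidance about any specific vulnerability.

```python
import ssl

# The OpenSSL build this Python runtime is linked against.
print("Linked OpenSSL:", ssl.OPENSSL_VERSION)

# Version as a comparable tuple, e.g. (3, 0, 13).
installed = ssl.OPENSSL_VERSION_INFO[:3]

# Placeholder minimum; substitute whatever your OS vendor's current
# security advisories actually call for.
minimum_patched = (3, 0, 0)

if installed < minimum_patched:
    print("OpenSSL looks out of date -- apply your system's security updates.")
else:
    print("OpenSSL meets the placeholder minimum; keep patching regularly.")
```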

Cardinal: So essentially you’ve got a black-hat type out there, working all the time to try to find vulnerabilities and exploit them, whereas on your side, you are also trying to find them and patch them. It’s like a game of spy vs. spy, in a way.

Weeks: It’s a chess game. And the thing is, we have to be right all the time. They only have to be right once. So it’s a challenge. It’s a real challenge. And that’s where we’re talking about resources. 

Companies need to be putting more and more resources into making sure what they put online is secure by design and that they keep up with it. One big fallacy is, well, we secured our new product, our new router or whatever, and put it out there. Yeah, that’s not good enough, because all the software your product is built on has to get regular updates. 

That’s where the “internet of things” is a perfect example: these home routers and network devices and web phones and web cameras, things you put online or put in your home — smart home devices. You don’t think about that Google appliance or that Amazon device listening to your words. Those things have to get patched too, because if they don’t get patched, they become the big target.

People don’t think about IoT devices and the things we have plugged in all around us that are never paid attention to. They’re treated as appliances, not as network-attached computers.

Cardinal: I’ve had to wonder why everything has to be attached to the internet.

Weeks: I don’t buy home appliances that can get online, or if I do, I disable them or keep them off because … you’re just expanding your network profile, your security profile online. If it gets bigger and bigger and bigger, you’ve got more and more to protect, and you’ll slip up and forget.

Cardinal: That’s going to lead me into a final question, but I wanted to interrupt that flow for a second because I also realize that in that rapidly changing landscape, there’s a lot of AI now being used for attacks, and I guess that’s a new thing you’ll have to deal with. Could you talk a little bit about that and how you’re approaching it?

Garner: There are kind of two parts to it. I feel like there’s a social and political mentality that AI is being used autonomously, by itself. We have used the AI tools. They are not autonomous yet. So as for the idea that actors are out there with an AI thing that can just break in and do everything — the tools are there, but [hackers] still need to be technical.

Where I feel a lot of attackers are using AI is in its ability to code, its ability to debug code. To touch back on something Tweeks said, he had mentioned zero-day attacks. Zero-day attacks can be these huge things, or they can be a very small thing that’s easy to fix. But imagine that there’s something China has got, or some other nation state has got — a little vulnerability in Windows machines that lets them record anything you type on your keyboard. Now that’s highly dangerous.

It’s even more dangerous if they launch the attack and nobody knows how to fix it right off the bat. Until there’s a fix for these zero days, there’s that huge window. Where AI is being used is that they can now take these drivers, the code, everything in these software applications, everything in the OS and new updates. Every time something gets fixed, they’re looking to break that next update: What did the engineer overlook, and what little bug in their code can they now leverage for that new exploit?

Usually it takes a lot of time to dig in, and it’s kind of like looking for arrowheads. I love looking for arrowheads. You could walk the same little patch and walk by an arrowhead a million times. It’s just like trying to find a bug in code. If you’re relying solely on your own ability, you’re probably going to miss that error four times out of 10. Now take AI that is [error-free] in code. It’s really, really good at coding. And now you’re just asking it to find the bug.

A huge, huge vulnerability that we have to be aware of in the future is how quickly AI is going to be able to find these bugs that can be exploited in zero days. 

Weeks: I’d also add, from the client side, from the user perspective, that [client] operating systems, … phones and devices are being embedded with AI that records your every keystroke, takes screenshots and knows your patterns.

That’s the new target, because that’s the gold mine. 

If they can say, hey, show me all Windows systems online that have Windows 11 and exploit the operating system, exploit RDP [remote desktop protocol] or whatever to get in, and then start harvesting that data that Microsoft is collecting — that’s scary because that’s something that’s being trusted. 

People are clamoring for AI on devices and a lot of us security guys are like, I’m not putting AI on any of my systems. You need to prove it’s secure before you start deploying stuff like that.

Cardinal: That leads into my final question. I was hoping that you could talk a little bit about what a regular person can do to protect their own property and what they can do, or not do, in their workplaces to defend against the “dark arts.”

Weeks: Kind of classic: Think before you click. [For example], when you install new apps. I recently saw a system I oversee get compromised, and it was because I didn’t think before I clicked. I installed some Chrome plugin that had malware attached to it.

When things ask for permissions on your phone, think about it: That little free game you just downloaded doesn’t need access to your microphone and your camera, and access to your files and your photos. I don’t care how cool the game is, don’t install it.

I’m constantly having to stop my family members from installing apps that are asking for questionable permissions. And you won’t know about it until it’s too late. So it’s better to be safe than sorry.

Same thing with devices. There’s no reason to have your washer and dryer connecting to servers in China, which is what a lot of these devices do. They’re calling home to apps that run on back-end systems in China or other countries. Even if it’s in the U.S., you don’t want these devices, which never get patched, on your network and calling home.

I have Blu-ray players and smart televisions. I have a separate network at the house for non-human devices. I have a human network for me and my wife, my laptop, my kids. And then we have a separate network, I call it my IoT network, that my Blu-ray player and my smart TV all connect to, so they can’t get to the humans and the data on my network.

So, compartmentalizing. A lot of routers now have an IoT network and a kind of home network, what I call the “BSG model.” In “Battlestar Galactica,” that’s how they kept the bad guys from hacking in, because the bad guys were computers and they hacked the entire planet Earth. The only place they didn’t hack was the Battlestar Galactica, because it was an older system that was disconnected and had all of its networks compartmentalized. So it’s a good model.
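One way to make that compartmentalizing concrete is to put a laptop on the IoT network temporarily and confirm it cannot reach anything on the “human” network. The sketch below uses hypothetical private addresses and a handful of common ports; substitute your own.

```python
import socket

# Hypothetical addresses on the "human" network (laptops, phones, file shares).
HUMAN_NETWORK_HOSTS = ["192.168.1.10", "192.168.1.20"]
COMMON_PORTS = [22, 139, 445]  # SSH and Windows file sharing

def reachable(host, port, timeout=1.0):
    """True if a TCP connection from this (IoT-side) machine succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

crossover = [(host, port)
             for host in HUMAN_NETWORK_HOSTS
             for port in COMMON_PORTS
             if reachable(host, port)]

if crossover:
    print("Segmentation gap: the IoT network can reach", crossover)
else:
    print("No crossover found: IoT devices cannot reach the human network.")
```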

Garner: We’re in a generation where we’re all looking for that easy fix. … Everybody wants that secret password manager that’s easy, and then they can’t forget that password. Well, you haven’t fixed the core problem, which is ourselves, and you’re relying on something that can be hit. LastPass got hit, and it was a huge debacle when that went down [in 2021]. … A lot of that got dumped into the [password list leak] RockYou2021.txt.

The main thing we can do is educate ourselves. There are plenty of little online resources, and the big question is: How does your average person interact with the internet, email and websites?

There are so many little things that I tell my parents about that they didn’t know. For instance, when you go to a Google search and you type in — let’s say you have a problem on Facebook. I have seen this before. You type in “Facebook phone number service desk.” Well, Facebook doesn’t have a phone number out there, but someone who may not know that will put it in. And then at the very top you would see “Facebook,” and above it you would see “sponsored.”

These attackers are out there sponsoring fake sites for very well-known places. So you click on it, and it goes to a server they control. Now they’re controlling everything that you see. They’ve got phone numbers where you can talk to a real person and think you’re at Facebook. But in reality, what they’re probably trying to do is say they can help you if you open a bitcoin account and put $1 in it.
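The lookalike “sponsored” results Garner describes usually hinge on a domain that merely resembles the real one. A toy check of where a link actually points can make the difference obvious; the imposter URL below is invented for illustration.

```python
from urllib.parse import urlparse

# Domains you actually intend to reach.
TRUSTED_DOMAINS = {"facebook.com", "amazon.com"}

def looks_legitimate(url):
    """True only if the link's hostname is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# The second URL is invented to mimic the ad-placed lookalikes described above.
for link in ["https://www.facebook.com/help",
             "https://facebook-support-desk.example.com/call-now"]:
    print(link, "->", "looks OK" if looks_legitimate(link) else "DO NOT TRUST")
```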

They do these things all the time. I believe that as we take our own security into our own hands and we just educate ourselves a little bit, that goes a long way.

Weeks: And just to add to what you’re saying: The biggest problem we’re seeing right now is social engineering.

We’ve all had a parent story, right, where Mom got hacked, or Grandma got tricked into thinking she was on the phone with her bank, with a scammer on the other end of the line, and shared her mother’s maiden name and account information. I mean, that’s insane.

But it all starts with getting you, the human, to do something. People are motivated through fear and urgency. So if you ever get an email or a text or anything that’s giving you this sense of urgency, if Amazon is saying your account is hacked, click here, change your password — don’t click there. Go out to Amazon yourself and check it.

If your bank sends you a text message, which they don’t do, saying you need to go here and change your online security, don’t go there. Call your credit card company at the number on the back of your card, or go to the website first. Don’t blindly trust these things that are trying to motivate you to do something.

We see that more and more, this fear motivation. Now they’re using AI to, for example, record your kids’ voices and then call you using a deepfake [of] your child’s voice saying, “Mom, Dad, help me. … I need to get bailed out.” And they’ll take payment in this form or that form or whatever.

It’s going on left and right nowadays. We really need to be adamant about checking, especially our older folks. They trust authorities and they trust doctors and banks and lawyers and Amazon and Google. They just trust them. We need to not automatically trust.

If it’s motivating through fear or urgency, let that be a little red flag saying, Hey, I should not do the thing they want me to do, but I’ll go do it my own way, a more legitimate way. That’s one way of getting around the social engineering aspect that’s really hurting a lot of our elderly community, too.

Cardinal: Deep faking children’s voices. Diabolical, man. That’s wild.

Weeks: Yeah, it’s happened to several people I know.

Tad Dickens is technology reporter for Cardinal News. He previously worked for the Bristol Herald Courier...