ChatGPT’s launch last November has captivated the security industry, as the artificially intelligent chatbot’s detailed responses seem ripe for abuse by scammers and cybercriminals. What’s the real threat?
Multiple studies have found that generative neural networks that produce code also reproduce the security vulnerabilities present in their training datasets.
ShmooCon 2023 has come and gone. Now it’s time to consider what the most laid-back infosec conference of the year — boasting the quirky tagline, “Less Moose Than Ever” — can tell us about the security industry heading into 2023.
The fallout of the Log4Shell vulnerability accelerated efforts to require a software bill of materials (SBOM) for the apps, libraries and other digital tools we rely on, but when it comes to generating and using this information, obstacles abound.
Back-to-back security conferences detailed the latest threats posed by malicious nation-states on the one hand and cybercriminals on the other. One takeaway: cybercrime is more voluminous and more persistent than the higher-profile advanced persistent threats.
At the Federal Trade Commission’s annual PrivacyCon this week, a top regulator and outside experts zeroed in on digital risks posed by the nascent virtual reality industry.
Apple has recently introduced a standalone security research site, significant changes to its bug bounty program and a bevy of security-related updates in iOS 16.
README excerpted this article from “Cyberinsurance Policy: Rethinking Risk in an Age of Ransomware, Computer Fraud, Data Breaches, and Cyberattacks.”
Vulnerabilities in nigh-ubiquitous apps like Zoom, Microsoft Teams and Slack, combined with the behavioral changes that accompanied many people’s unexpected move to remote work, have had an outsized impact on security.
Cloud anthropologist Steven Gonzalez Monserrate is no stranger to the mysterious world of data center security, having studied the inner workings of these digital monoliths for years. Here’s what he found on visits to facilities in Iceland and the U.S.