Nearly three in five global board members view generative AI as a security risk

Dubai: Proofpoint, Inc., a leading cybersecurity and compliance company, today released its second annual Cybersecurity: The 2023 Board Perspective report, which explores boards of directors' views on the global threat landscape, cybersecurity priorities, and their relationships with CISOs. The findings reveal that nearly three-quarters (73%) of those surveyed feel at risk of a material cyber attack, a notable increase from 65% in 2022. Likewise, 53% feel unprepared to cope with a targeted attack, up from 47% the previous year. Recent Proofpoint research shows that CISOs in the Middle East share similar sentiments: 55% of KSA CISOs and 75% of UAE CISOs admit they feel at risk of experiencing a material cyber attack in the next 12 months, and half of Middle East CISOs believe their organization is unprepared to cope with a targeted attack.

This year-over-year change may reflect the ongoing volatility of the threat landscape, including lingering geopolitical tensions and rises in disruptive ransomware and supply chain attacks. The emerging risk of artificial intelligence (AI) tools such as ChatGPT may also be contributing to these sentiments: 59% of board members believe generative AI is a security risk for their organization.

Global board members hold these concerns even though 73% view cybersecurity as a priority, 72% believe their board clearly understands the cyber risks they face, and 70% believe they have adequately invested in cybersecurity. The Cybersecurity: The 2023 Board Perspective report examines global, third-party survey responses from 659 board members at organizations with 5,000 or more employees across a range of industries.
In June 2023, more than 50 board directors were surveyed in each of the following 12 countries: the U.S., Canada, the UK, France, Germany, Italy, Spain, Australia, Singapore, Japan, Brazil, and Mexico.

The report explores three key areas: the cyber threats and risks boardrooms face, their level of preparedness to defend against those threats, and their alignment with CISOs based on the sentiments Proofpoint uncovered in its 2023 Voice of the CISO report. Proofpoint found a similar year-over-year increase in the number of CISOs who feel at risk and unprepared, and a closer alignment than before between board directors and security leaders.

"The newfound alignment between board members and their CISOs on cyber risk and preparedness is a positive sign that the two sides are working more closely together and making progress. However, this growing alliance hasn't yet delivered significant changes in cybersecurity posture, despite boards feeling good about the time and resources they're investing to combat this risk," said Ryan Kalember, executive vice president of cybersecurity strategy at Proofpoint. "Our findings show that it remains a challenge to translate increased awareness into effective cybersecurity strategies that protect people and data. Growing even stronger board-CISO relationships will be instrumental in the months ahead so directors and security leaders can have more meaningful conversations and ensure they're investing in the right priorities."

Key global findings from Proofpoint's Cybersecurity: The 2023 Board Perspective report include:

- Generative AI has the boardroom's attention: with tools such as ChatGPT getting much of the spotlight in recent months, 59% of those surveyed view this emerging technology as a security risk to their organization.
- Year-over-year comparison shows board members' increasing concern about cyber risk: 73% of those surveyed feel their organization is at risk of a material cyber attack, compared to 65% in 2022.
- Awareness and funding do not translate into preparedness: 73% of directors agree that cybersecurity is a priority for their board, 72% believe their board clearly understands the cyber risks they face, 70% think they have adequately invested in cybersecurity, and 84% believe their cybersecurity budget will increase over the next 12 months. However, these efforts are not leading to better preparedness: 53% still view their organization as unprepared to cope with a cyber attack in the next 12 months.
- Board members and CISOs have similar concerns about their biggest threats: board members ranked malware as their top concern (40%), followed by insider threat (36%) and cloud account compromise (36%). This is only slightly different from CISOs' top concerns of email fraud/BEC (33%), insider threat (30%), and cloud account compromise (29%).
- Directors are not completely aligned with CISOs on people risk and data protection: while most directors (63%) and CISOs (60%) agree that human error is their biggest risk, board members are much more confident in their organization's ability to protect data; 75% of directors share this view, compared to only 60% of CISOs.
- Bigger budgets, additional cyber resources, and better threat intelligence top boardrooms' wish lists: 37% of board directors said their organization's cybersecurity would benefit from a bigger budget, 35% would like to see more cyber resources, and 35% would like better threat intelligence.
- Board-CISO interactions and relationships are gradually improving: 53% of directors say they interact with security leaders regularly. While this is an increase from last year's 47%, it still leaves nearly half of all boardrooms without strong CISO-C-suite relationships. When they do interact, however, board members and CISOs are generally closely aligned, with 65% of board members saying they see eye-to-eye with their CISO and 62% of CISOs agreeing.
From a regional lens, previous Proofpoint research shows that 63% of CISOs in the UAE and 45% of CISOs in KSA agree that board members see eye-to-eye with them on cybersecurity issues.

- Personal liability is a concern for boards and CISOs alike: 72% of board directors expressed concern about personal liability in the wake of a cybersecurity incident at their own organization, and 62% of CISOs agree.

"Board members are taking cybersecurity matters seriously, demonstrating they have no illusions about human risk and the impact cyber threats pose to an organization's bottom line. They are making strides in their relationships with security leaders, understanding that strong board-CISO partnerships are more critical than ever," said Kalember. "But this is not a time to grow complacent. Boards must continue to invest heavily in improving preparedness and organizational resilience. This means pushing for even deeper, more productive conversations with CISOs to ensure directors are making informed, strategic decisions that drive positive outcomes."

ChatGPT phishing fantasies: will AI chatbots help fight cyberscams?

Dubai: While ChatGPT had previously demonstrated the ability to create phishing emails and write malware, its effectiveness in detecting malicious links proved limited. A Kaspersky study revealed that although ChatGPT knows a great deal about phishing and can often guess the target of a phishing attack, it had high false positive rates of up to 64 percent, and it often produced imaginary explanations and false evidence to justify its verdicts.

ChatGPT, an AI-powered language model, has been a topic of discussion in the cybersecurity world due to its potential to create phishing emails and concerns about its impact on cybersecurity experts' job security, despite its creators' warnings that it is too early to apply the novel technology to such high-risk domains. Kaspersky experts decided to conduct an experiment to gauge ChatGPT's ability to detect phishing links, as well as the cybersecurity knowledge it learned during training. The company's experts tested gpt-3.5-turbo, the model that powers ChatGPT, on more than 2,000 links that Kaspersky anti-phishing technologies deemed phishing, mixed with thousands of safe URLs.

In the experiment, detection rates varied depending on the prompt used. The experiment was based on asking ChatGPT two questions: "Does this link lead to a phishing website?" and "Is this link safe to visit?". For the first question, ChatGPT had a detection rate of 87.2% and a false positive rate of 23.2%. The second question yielded a higher detection rate of 93.8%, but also a much higher false positive rate of 64.3%. While the detection rate is very high, the false positive rate is too high for any kind of production application.

Question asked                             | Detection rate | False positive rate
Does this link lead to a phishing website? | 87.2%          | 23.2%
Is this link safe to visit?                | 93.8%          | 64.3%

The unsatisfactory results at the detection task were expected, but could ChatGPT help with classifying and investigating attacks?
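For readers unfamiliar with how these two figures are derived, the arithmetic can be sketched as follows. The labels and verdicts below are invented for illustration only; Kaspersky's actual evaluation data and harness are not public.

```python
def phishing_metrics(records):
    """Compute detection rate and false positive rate from
    (true_label, model_verdict) pairs, where True means 'phishing'.
    Detection rate: share of real phishing URLs flagged as phishing.
    False positive rate: share of safe URLs wrongly flagged."""
    tp = sum(1 for truth, verdict in records if truth and verdict)
    fn = sum(1 for truth, verdict in records if truth and not verdict)
    fp = sum(1 for truth, verdict in records if not truth and verdict)
    tn = sum(1 for truth, verdict in records if not truth and not verdict)
    return tp / (tp + fn), fp / (fp + tn)

# Hypothetical mini-sample: 4 phishing URLs, then 4 safe URLs.
sample = [
    (True, True), (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False), (False, False),
]
det, fpr = phishing_metrics(sample)
print(f"detection rate: {det:.1%}, false positive rate: {fpr:.1%}")
# → detection rate: 75.0%, false positive rate: 25.0%
```

The trade-off the study observed follows directly from these definitions: a prompt that makes the model more eager to say "phishing" raises both numbers at once, which is why the second question's 93.8% detection came with a 64.3% false positive rate.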
Since attackers typically mention popular brands in their links to deceive users into believing that a URL is legitimate and belongs to a reputable company, the AI language model shows impressive results in identifying potential phishing targets. For instance, ChatGPT successfully extracted a target from more than half of the URLs, including major tech portals like Facebook, TikTok, and Google, marketplaces such as Amazon and Steam, and numerous banks from around the globe, among others, all without any additional training.

The experiment also showed that ChatGPT may have serious problems when it comes to proving its point on whether a link is malicious. Some explanations were correct and based on facts; others revealed known limitations of language models, including hallucinations and misstatements: many explanations were misleading, despite their confident tone.

Below are examples of misleading explanations provided by ChatGPT:

- References to WHOIS, which the model doesn't have access to: "Finally, if we perform a WHOIS lookup for the domain name, it was registered very recently (2020-10-14) and the registrant details are hidden."
- References to content on a website that the model doesn't have access to either: "The website is asking for user credentials on a non-Microsoft website. This is a common tactic for phishing attacks."
- Misstatements: "The domain '' is not associated with Netflix and the website uses 'http' protocol instead of 'https'" (the website uses https).
- Revelatory nuggets of cybersecurity information: "The domain name for the URL '' appears to be registered in North Korea, which is a red flag."

"ChatGPT certainly shows promise in assisting human analysts in detecting phishing attacks, but let's not get ahead of ourselves: language models still have their limitations.
While they might be on par with an intern-level phishing analyst when it comes to reasoning about phishing attacks and extracting potential targets, they tend to hallucinate and produce random output. So, while they might not revolutionize the cybersecurity landscape just yet, they could still be helpful tools for the community," comments Vladislav Tushkanov, Lead Data Scientist at Kaspersky.

Further details of the experiment are available in Kaspersky's full report. Kaspersky's ML team is at the forefront of applying machine learning technologies to cybersecurity tasks, constantly updating Kaspersky products with the latest tech and intel. To take advantage of Kaspersky's expertise in machine learning and stay protected, the company's experts recommend:

- For corporate cybersecurity, Kaspersky Managed Detection and Response is an essential tool capable of detecting and preventing intrusions in their initial stages. It utilizes advanced machine-learning models to filter out mundane events and sends only alarming ones to professional human analysts. This service enhances a company's ability to withstand cyber threats while optimizing the use of existing workforce resources.
- Providing your staff with basic cybersecurity hygiene training is crucial. Conducting simulated phishing attacks can also help ensure that they know how to distinguish phishing emails.
- Lastly, using the latest threat intelligence to stay aware of the actual TTPs (tactics, techniques, and procedures) used by threat actors is also recommended to enhance cybersecurity.