CIO.com reported “When it comes to AI, the fear of missing out is real. According to research by Coleman Parkes Research on behalf of Riverbed, 91% of decision-makers at large companies are concerned their competitors will have an advantage if they get ahead with AI. So it’s no surprise that every respondent said that when it comes to gen AI, they’ll either be using it, testing it, or planning projects with it over the next 18 months.” The October 9, 2024 article entitled “Weighing the risks of moving too fast with gen AI” (https://tinyurl.com/2r9aaes2) included these comments about “Taking the time for groundwork”:

Besides public embarrassment, loss of customers or employees, or legal and compliance liabilities, there are also other, more technical risks of moving too fast with gen AI.

For example, companies that don’t do proper groundwork before rolling AI out might not have the right data foundation or proper guardrails in place, or they might move too quickly to put all their faith in a single vendor.

“There’s a lot of risk that organizations will lock themselves into with a multi-year spend or commitment, and it’ll turn out in a year or two that there’s a cheaper and better way to do things,” says David Guarrera, generative AI lead at EY Americas. And there are organizations that jump into AI without thinking about their enterprise-wide technology strategy.

“What’s happening in many places is that organizations are spinning up tens or hundreds of prototypes,” he says. “They might have a contract analyzer made by the tech shop, and a separate contract analyzer made by the CFO’s office, and they might not even know about each other. We might have a plethora of prototypes being spun up with nowhere to go and so they die.”

Then there’s the issue of wasted money. “Say an organization has FOMO and buys a bunch of GPUs without asking if they’re really needed,” he says. “There’s a risk that investing here might take away from what you actually need in the data space. Maybe what you actually need is more data governance or data cleaning.”

The rush to launch pilots and make hasty spending decisions is driven by everyone panicking and wanting to get on top of gen AI as quickly as possible. “But there are ways to approach this technology to minimize the regrets going forward,” he adds.

What do you think?

First published at https://www.vogelitlaw.com/blog/what-are-you-doing-to-weigh-the-risks-of-genai

SCworld.com reported that “As AI continues to transform how cybersecurity services are delivered, it’s crucial to choose a security service provider that leverages AI responsibly and effectively. The following checklist will help you assess a provider’s AI capabilities, focusing on their ability to integrate AI in ways that improve security outcomes, enhance efficiency and maintain ethical standards.” The October 4, 2024 report entitled “Evaluating AI-augmented cybersecurity service providers” (https://tinyurl.com/582u8m5t) included these 8 things you need to do:

1. Understand the AI LLM (Large Language Model) and Its Training

2. Examine Data Security Controls Across the Data Lifecycle

3. Evaluate Their Approach to Responsible AI Implementation

4. Assess the Depth of AI Integration in Security Operations

5. Review Customer Interaction Capabilities Powered by AI

6. Confirm AI-Driven Automation for Workflow Efficiency

7. Look for Industry-Specific AI Solutions

8. Request Metrics to Prove AI-Driven Security Outcomes

These 8 comments are good advice!

First published at https://www.vogelitlaw.com/blog/8-important-things-to-do-evaluate-ai-augmented-cybersecurity-providers


PwC.com reported that “Yet despite widespread awareness of the challenges, significant gaps persist. To safeguard their organisations, executives should treat cybersecurity as a standing item on the business agenda, embedding it into every strategic decision and demanding C-suite collaboration.” The PwC report entitled “Findings from the 2025 Global Digital Trust Insights” (https://www.pwc.com/us/en/services/consulting/cybersecurity-risk-regulatory/library/global-digital-trust-insights.html) included these comments:

PwC’s 2025 Global Digital Trust Insights survey of 4,042 business and tech executives from across 77 countries revealed significant gaps companies must bridge before achieving cyber resilience.

  • Gaps in implementation of cyber resilience: Despite heightened concerns about cyber risk, only 2% of executives say their company has implemented cyber resilience actions across their organisation in all areas surveyed.
  • Gaps in preparedness: Organisations feel least prepared to address the cyber threats they find most concerning, such as cloud-related risks and third-party breaches.
  • Gaps in CISO involvement: Fewer than half of executives say their CISOs are involved to a large extent with strategic planning, board reporting and overseeing tech deployments.
  • Gaps in regulatory compliance confidence: CEOs and CISOs/CSOs have differing levels of confidence in their ability to comply with regulations, particularly regarding AI, resilience and critical infrastructure.
  • Gaps in measuring cyber risk: Although executives acknowledge the importance of measuring cyber risk, fewer than half do so effectively, with only 15% measuring the financial impact of cyber risks to a significant extent.

Obviously very concerning news! What do you think?

First published at https://www.vogelitlaw.com/blog/only-2-of-organizations-have-implemented-adequate-cyber-resilience


BankInfoSecurity.com reported that “The cybersecurity industry is suffering from a stagnant workforce, a growing skills gap and a worldwide shortage of nearly 5 million qualified professionals. Despite increasing demand, many organizations struggle to fill critical roles, hindered by budget constraints and a highly competitive market for specialized skills in areas such as artificial intelligence and cloud security.”  The October 2, 2024 report entitled “How Are We Going to Fill 4.8 Million Cybersecurity Jobs?” (https://www.bankinfosecurity.com/how-are-we-going-to-fill-48-million-cybersecurity-jobs-a-26431) included these comments:

ISC2’s 2024 Cybersecurity Workforce Study found that the number of cybersecurity jobs worldwide – 5.8 million – remained about the same over the past year, while the shortage of workers – 4.8 million – grew by 19%.

“It’s not just about the people available in the market. It’s about the skilling, and I think that’s where the focus needs to be – getting the right skill sets into the right job roles,” said Jon France, CISO at ISC2, a nonprofit organization specializing in training and certifications for cybersecurity professionals.

This is not a surprise, but it is very alarming!

First published at https://www.vogelitlaw.com/blog/can-you-believe-that-there-are-more-than-48-million-unfilled-cybersecurity-jobs


SCMagazine.com reported that “AI and automation are two of the most powerful tools helping audit, risk and compliance teams close the risk resiliency gap,” and AuditBoard believes that if cyber has reshaped the enterprise risk assessment and management world, AI is about to push ESG frameworks into overdrive. The September 27, 2024 article entitled “4 ways AI is transforming audit, risk and compliance” (https://www.scworld.com/feature/4-ways-ai-is-transforming-audit-risk-and-compliance) included these comments from Rich Marcus at the Audit & Beyond 2024 conference:

AI and automation are reshaping audit, risk, and compliance workflows, especially in cybersecurity, by boosting efficiency and accuracy. These tools help bridge the gap between fast-evolving threats, regulatory demands, and limited resources. AI enables real-time risk sharing, automates the culling of evidentiary data, and streamlines framework stress testing, allowing teams to conduct more frequent assessments with a more accurate analysis.

This process not only sharpens cybersecurity defenses, but also makes it easier for companies to juggle new regulations like the SEC’s cybersecurity disclosure rules.

Marcus suggested the whole of these complementary technologies is greater than the sum of its parts. By automating labor-intensive tasks like evidence collection, control testing, and risk reporting, they allow for real-time risk management.

This transformation frees up compliance teams to focus on strategic decision-making and responding proactively to evolving threats, he said.

What do you think?

First published at https://www.vogelitlaw.com/blog/anyone-surprised-that-ai-is-transforming-audit-risk-and-compliance


Computerworld.com reported that “The first body cams were primitive. They were enormous, had narrow, 68-degree fields of view, had only 16GB of internal storage, and had batteries that lasted only four hours. Body cams now usually have high-resolution sensors, GPS, infrared for low-light conditions, and fast charging. They can be automatically activated through Bluetooth sensors, weapon release, or sirens. They use backend management systems to store, analyze, and share video footage.”  The September 27, 2024 article entitled “What happens when everybody winds up wearing ‘AI body cams’?” (https://www.computerworld.com/article/3537041/what-happens-when-everybody-winds-up-wearing-ai-body-cams.html) included the following comments:

Body cams have become ubiquitous in US law enforcement, with all police departments serving populations of more than 1 million implementing them by 2020. Nationwide, 79% of officers work in departments that use body cams.

AI in body cams is designed to solve the problem of sifting through thousands of hours of footage to extract actionable information, with vastly more advanced versions coming soon for police and across all sectors.

As the use of AI body cams grows to include all police departments, security personnel, and large numbers of employees across many industries, the public will also be getting AI body cams.

I’ve written in the past about the mainstreaming of AI glasses with cameras for multimodal AI. Remember Google’s Project Astra demo from Google I/O 2024? In that video, a researcher picked up a pair of AI glasses running Google Gemini and conversed with the AI about what they both were looking at. 

What do you think about AI body cams?

First published at https://www.vogelitlaw.com/blog/are-we-ready-for-ai-body-cams


CIO.com reported that “Salesforce today released Agentforce, a new suite of low-code tools aimed at helping enterprises build autonomous AI agents for sales, service, marketing, and commerce use cases.” The September 12, 2024 article entitled “Salesforce unveils Agentforce to help create autonomous AI bots” (https://www.cio.com/article/3518646/salesforce-unveils-agentforce-to-help-create-autonomous-ai-bots.html) included these comments:

Agentforce, which has been in pilot phase for the past six months, combines three major Salesforce tools — Agent Builder, Model Builder, and Prompt Builder — to provide the necessary software development infrastructure to create these autonomous agents, according to the company.

Unlike chatbots, AI agents created via Agentforce will be capable of taking actions on their own, Salesforce claimed. The autonomous nature of such agents is a central facet of “agentic AI,” a rising enterprise strategy for transforming business processes by automating specific functions within those processes, without human intervention.

Here are some details about “Salesforce’s journey to Agentforce”:

Salesforce previously enabled actions in conversational bots powered by large language models (LLMs) when it introduced Actions inside its Einstein Copilot in April this year.

Called “Copilot Actions” when released, these were a library of preprogrammed capabilities to help sellers benefit from conversational AI in Sales Cloud.

A top Salesforce executive explained to CIO.com that these “Actions” were basically workflows that could be built inside the Copilot via the Einstein 1 Studio set of low-code tools for creating, customizing, and embedding AI models in Salesforce workflows.

What do you think about Agentforce?

First published at https://www.vogelitlaw.com/blog/is-it-good-or-bad-that-salesforce-is-launching-an-ai-tool-to-create-autonomous-ai-bots


CIO.com reported “The added risks of shadow generative AI are specific and tangible and can threaten organizations’ integrity and security. Unmonitored AI tools can lead to decisions or actions that undermine regulatory and corporate compliance measures, particularly in sectors where data handling and processing are tightly regulated, such as finance and healthcare.” The September 12, 2024 article entitled “3 steps to eliminate shadow AI” (https://www.cio.com/article/3512828/3-steps-to-eliminate-shadow-ai.html?utm_campaign=editorial&utm_medium=cio&utm_source=browseralert) included these comments about “How C-suite executives can bridge the chasm”:

With “78% of AI users bringing their own AI tools to work,” a growing chasm exists between what employees want and what IT and AI teams can safely provide. A study of 700 IT and data decision-makers sponsored by Iron Mountain indicates that 36% rank “protecting and managing the data and other assets created by generative AI” among the top challenges they face. “Creating and enforcing generative AI policies” closely follows at 35%. Following are three recommendations for encouraging innovation while maintaining security, compliance, ethics, and governance standards.

Interesting times with Shadow AI!

First published at https://www.vogelitlaw.com/blog/advice-about-eliminating-shadow-ai


SCMagazine.com reported that “Hackers are using cloud service attacks as a way to go after big-money targets in the insurance and financial industries.” The September 11, 2024 article entitled “Hackers use cloud services to target financial and insurance firms” (https://tinyurl.com/ysr2z33d) included these comments:

The most common targets in the attacks are companies that work in the extremely lucrative financial and insurance sectors, suggesting the hacking crew is looking for a few big payouts before shutting down the operation.

The move is believed to be something of a departure from Scattered Spider’s usual tactics.

“Scattered Spider frequently uses phone-based social-engineering techniques like voice phishing (vishing) and text message phishing (smishing) to deceive and manipulate targets, mainly targeting IT service desks and identity administrators,” explained researcher Arda Büyükkaya.

“The actor often impersonates employees to gain trust and access, manipulate MFA settings, and direct victims to fake login portals.”

The researchers found the attackers using a number of methods to gain access to cloud services. Among the most notable was searching services like GitHub for cloud access tokens that developers had accidentally left in source code, a growing problem for many companies.
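To illustrate how easily token-shaped strings can be spotted in a repository, here is a minimal sketch of the kind of pattern matching that secret scanners (and attackers) apply to source code. The function names are hypothetical; the AWS access key ID format (“AKIA” plus 16 uppercase alphanumerics) and the GitHub personal access token prefix (“ghp_”) are well-known public formats:

```python
import re
from pathlib import Path

# Well-known public token formats; real scanners check many more.
TOKEN_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_text(name: str, text: str) -> list[tuple[str, str, int]]:
    """Return (file name, pattern name, line number) for each match."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pat_name, pattern in TOKEN_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, pat_name, lineno))
    return hits

def scan_tree(root: str) -> list[tuple[str, str, int]]:
    """Scan every readable file under root for token-shaped strings."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # skip unreadable files
            hits.extend(scan_text(str(path), text))
    return hits
```

Running a scan like this on your own repositories before pushing them is a cheap defensive measure against exactly the exposure the researchers describe.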

Are you surprised?

First published at https://www.vogelitlaw.com/blog/nbspare-you-surprised-that-cloud-services-are-cyber-targets


Computerworld.com reported that “While AI will reduce or eliminate altogether the need for human input in some areas, it will also enhance productivity, requiring professionals to reskill and adapt to more strategic and creative roles,…”  The September 9, 2024 article entitled “Will genAI kill the help desk and other IT jobs?” (https://tinyurl.com/27m2tpkw) included these comments:

Along those same lines, a survey of CFOs in June by Duke University and the Atlanta and Richmond Federal Reserve banks found that 32% of organizations plan to use AI in the next year to complete tasks once done by humans. And in the first six months of 2024, nearly 60% of companies (and 84% of large companies) said they had deployed software, equipment, or technology to automate tasks previously done by employees, the survey found.

Organizations are using AI to automate a wide range of business processes, including paying suppliers, invoicing, procurement, financial reporting, and optimizing facilities utilization, according to Duke University finance professor John Graham, academic director of the survey. “This is on top of companies using ChatGPT to generate creative ideas and to draft job descriptions, contracts, marketing plans, and press releases.”

What do you think about this news?

First published at https://www.vogelitlaw.com/blog/looks-like-genai-may-kill-the-it-help-desk-and-other-it-jobs