MSSPAlert.com reported that “While the Cybersecurity & Infrastructure Security Agency (CISA) has come out against paying ransoms, the director of the organization stopped short of saying that the government should ban such payments.” The July 9, 2024 article entitled “CISA Advises Against Paying Ransom, But Rules Out a Ban” (https://tinyurl.com/bded2hsy) included these comments:

CISA Director Jen Easterly recently made her position on ransomware payments known at the Oxford Cyber Forum, as reported by Security Intelligence. However, Easterly didn’t go so far as to call for a ban on paying ransomware demands.

“I think within our system in the U.S. — just from a practical perspective — I don’t see it happening,” she said.

Backing up that assertion, the Ransomware Task Force at the Institute for Security and Technology does not support a ban on paying ransoms, according to Security Intelligence. The task force reasoned that small businesses typically cannot withstand a lengthy business disruption and might go out of business after a ransomware attack, and that a ban could disrupt the wider response to ransomware threats.

What do you think about this?

First published at https://www.vogelitlaw.com/blog/will-there-be-a-legal-ban-to-pay-ransomware

SCMagazine.com reported that “Network security controls are no longer reliable or sufficient. They are easily evaded, prone to false positives, and feed a costly ecosystem of alert management and incident response.” The July 8, 2024 article entitled “Alert overload? There’s a better way to secure your network” (https://tinyurl.com/4jmcb7sz) included these comments:

According to pen testing by Positive Technologies, an external attacker can breach an organization’s network perimeter in 93% of cases. This is unacceptable, and you no longer need to settle for it. For the past six years, the technologists at Trinity Cyber have been working obsessively to invent a new and better way to detect and truly prevent cyber attacks at the perimeter.

It is now possible to open, fully inspect, and edit full-session network traffic with a capability fast and accurate enough to run inline. Previously thought impossible, this new capability is now the most effective anti-hacking tool in existence. The company calls it Full Content Inspection (FCI), and it is unlike any other security control. It is a new capability—a full-session, parsed content, active sensor that produces better, more reliable and more enduring security results by applying a different form of threat-identification logic enabled by a different kind of engineering. You can use it instead of or in addition to your current network controls. 

With this new approach, detection accuracy jumps through the roof and false positives drop below one percent. It accurately detects and stops every Common Vulnerability and Exposure (CVE) on the Cybersecurity and Infrastructure Security Agency (CISA) Known Exploited Vulnerability (KEV) list, every time. It’s not too good to be true. It’s real, and it works. 

What do you think?

First published at https://www.vogelitlaw.com/blog/are-we-at-a-crucial-cyber-alert-overload


Computerworld.com reported that “A new study revealed that employees have real fears about AI’s intrusion into their workplace. Companies can alleviate many of those anxieties by being more transparent around how they plan to use the technology.” The July 2, 2024 report entitled “Top 5 AI employee fears and how to combat them” included these AI employee fears:

#1 Job displacement due to AI that makes their job harder, more complicated, or less interesting

#2 Inaccurate AI that creates incorrect or unfair insights that negatively impact them

#3 Lack of transparency around where, when, and how the organization is using AI, or how it will impact them

#4 Reputational damage that occurs because the organization uses AI irresponsibly

#5 Data insecurity because the implementation of AI solutions puts personal data at risk

These five AI employee fears came from Gartner and EY:

Those were two top fears revealed in a recent study by Gartner about the five main concerns workers have over generative AI and AI in general. And those fears are warranted, according to survey data. For example, IDC predicts that by 2027, 40% of current job roles will be redefined or eliminated across Global 2000 organizations adopting genAI.

A remarkable 75% of employees said they are concerned AI will make certain jobs obsolete, and about two-thirds (65%) said they are anxious about AI replacing their job, according to a 2023 survey of 1,000 US workers by professional services firm Ernst & Young (EY). About half (48%) of respondents said they are more concerned about AI today than they were a year ago, and of those, 41% believe it is evolving too quickly, EY’s AI Anxiety in Business Survey report stated.

“The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement,” EY said in its report.

What do you think?

First published at https://www.vogelitlaw.com/blog/top-5-ai-employee-fears-are-not-a-surprise


Computerworld.com reported that “California has done a great job, but its policies are not binding outside of its borders. The US is more freewheeling and supportive of business innovation than many other nations. That can be one of this country’s strengths. But genAI, and AI in general, has the potential to be as destructive as it can be constructive.” The June 26, 2024 article entitled “AI regulation: While Congress fiddles, California gets it done” (https://tinyurl.com/yckfmx43) included these comments:

There are about 650 proposed state bills in 47 states and more than 100 federal congressional proposals related to AI, according to Multistate.ai. New York alone is home to 98 bills and California has 55.

When regulations are codified in so many ways by so many sources in so many places, the chance for conflicting directives is high — and the result could stifle business and leave loopholes in protections.

AI’s complexity adds to the confusion as do the numerous aspects of AI that warrant regulation. The list is lengthy, including job protection, consumer privacy, bias prevention and discrimination, deepfakes, disinformation, election fraud, intellectual property, copyright, housing, biometrics, healthcare, financial services, and national security risks.

So far, the federal government has dragged its feet on AI regulation, seemingly more focused on party politics and infighting than in crafting useful measures. As a result, Congress has not been an effective tool for structuring regulation policy.

The time for congressional action on AI regulation was two or three years ago. But with little being done federally, the states, particularly California, are attempting to fill the breach.

Given California’s leadership in privacy, this is not a surprise!

First published at https://www.vogelitlaw.com/blog/big-surprise-california-becomes-the-national-leader-on-ai-regulation


My January 2018 blog was titled “Cybersecurity Software: Kaspersky Lab filed a lawsuit against US government to enjoin federal ban!”  (https://tinyurl.com/3pkhtums)  and now GovInfoSecurity.com is reporting that “Senior executives of Russian cybersecurity firm Kaspersky face new restrictions against doing business in Western countries following an announcement Friday morning by the U.S. Department of the Treasury that it sanctioned 12 of them.” The June 21, 2024 article entitled “US Sanctions 12 Kaspersky Executives” (https://tinyurl.com/2cpdbemw) included these comments:

The sanctions do not include company CEO Eugene Kaspersky or the company itself – which the Biden administration on Thursday banned from doing business inside the United States and effectively from obtaining U.S.-made technology (see: Biden Administration Bans Kaspersky Antivirus Software).

Today’s sanctions encompass the Kaspersky Lab board of directors, the company’s head of research and development, the heads of consumer and corporate businesses and other members of the executive team.

The sanctions prevent blacklisted individuals from conducting business transactions with U.S. financial institutions or individuals.

Treasury has ramped up sanctions pressure on Russia and Russian corporations as the Kremlin war of conquest against European neighbor Ukraine continues well into its second year. Earlier this month, the department prohibited American companies from offering IT support or cloud services for enterprise management software or applications used in design or manufacturing.

No surprise about these sanctions!

First published at https://www.vogelitlaw.com/blog/kaspersky-executives-were-sanctioned-but-old-news-going-back-to-at-least-2018


BankInfoSecurity.com reported that “Critical infrastructure sectors face many potentially disruptive threats such as supply chain vulnerabilities, climate risks and the growing dependency on space-based systems. But the top cyberthreats facing the U.S. are nation-state adversaries in the People’s Republic of China and emerging risks associated with artificial intelligence and quantum computing,…” The June 20, 2024 article entitled “DHS Unveils Critical Infrastructure Cybersecurity Guidance” (https://www.bankinfosecurity.com/dhs-unveils-critical-infrastructure-cybersecurity-guidance-a-25584) included this announcement from the Department of Homeland Security:

….new guidance on defending against those risks. He called on sector risk management agencies responsible for overseeing the protection of critical infrastructure in the U.S. to work with owners and operators to develop and implement a foundation of resilience measures. Those measures should include response plans “to quickly recover from all types of shocks and stressors,” while anticipating potential cascading impacts of cyberattacks, according to the guidance document.

We depend on the reliable functioning of our critical infrastructure as a matter of national security, economic security, and public safety. The threats facing our critical infrastructure demand a whole-of-society response, and the priorities set forth in this memo will guide that work.

This is clearly a serious issue for all of us!

First published at https://www.vogelitlaw.com/blog/ai-is-critical-to-a-successful-public-private-cyber-collaboration-in-the-us


BankInfoSecurity.com reported that “Companies are significantly expanding their SEC cyber risk disclosures as they aim to demonstrate their cybersecurity efforts, instill market confidence and potentially improve stock prices, according to Kayne McGladrey, field CISO, Hyperproof.” The June 12, 2024 article entitled “SEC Cyber Risk Disclosures: What Companies Need to Know” (https://tinyurl.com/3kumdcak) included these comments:

In this video interview with Information Security Media Group at the Cybersecurity Implications of AI Summit, McGladrey also discussed:

• Why companies should use tools and software to collect and automatically gather evidence of compliance;

• The consequences of false cyber risk disclosures;

• The impact that SEC requirements have on private companies and supply chains.

McGladrey has more than 20 years of leadership experience in companies such as AT&T and Pensar Development. He serves as an advisory board member for several universities and organizations.

Are you ready?

First posted at https://www.vogelitlaw.com/blog/are-you-prepared-to-report-cyber-attack-to-the-sec


GovInfoSecurity.com reported that “The Cybersecurity and Infrastructure Security Agency heard recommendations for the Joint Cyber Defense Collaborative approved Wednesday by the agency’s Cybersecurity Advisory Committee. The recommendations urge CISA to deepen the JCDC’s focus on operational collaboration and clarify key operational components, such as criteria for membership and participation in physical spaces.” The June 7, 2024 article entitled “CISA Planning JCDC Overhaul as Experts Criticize Slow Start” (https://tinyurl.com/5ma3st67) included these comments:

CISA has accepted or partially accepted virtually all of the advisory committee’s last set of recommendations. CISA established the committee in June 2021 to provide strategic advice; it consists of 23 leading cybersecurity, technology and risk management experts. The agency did not immediately respond to a request for comment as to whether it plans to adopt the JCDC recommendations.

CISA has struggled to clearly detail who can be a member of the JCDC – and what being a JCDC partner even means – from its very inception, said Ari Schwartz, coordinator of the Center for Cybersecurity Policy and Law and former senior director of cybersecurity for the National Security Council.

This sounds like good news. What do you think?

First published at https://www.vogelitlaw.com/blog/have-you-heard-that-the-joint-cyber-defense-collaborative-needs-improvement


GovInfoSecurity.com reported that the “U.S. Department of Justice and the Federal Trade Commission are set to spearhead antitrust investigations into Microsoft, OpenAI and Nvidia that could potentially reshape the burgeoning commercial artificial intelligence industry.” The June 6, 2024 article entitled “US Regulators Intensify Antitrust Scrutiny of AI Developers” (https://tinyurl.com/38mbw7pz) included these comments:

The probes were first reported Thursday after U.S. regulators seemingly reached an agreement on how to move forward with investigating Microsoft’s $13 billion investment into the ChatGPT maker, as well as whether Nvidia violated antitrust laws in its development of next-generation AI chips.

The FTC will lead the investigation into OpenAI and Microsoft, which first invested $1 billion in the AI startup in 2019 and has since assumed a 49% stake in the company. The Justice Department will oversee a separate probe into Nvidia, which the industry increasingly relies on for high-performance semiconductors to power AI products, according to The New York Times.

Neither the FTC nor Justice immediately responded to requests for comment. Nvidia, Microsoft and OpenAI were silent about the news on Thursday, though all three companies have previously confirmed their AI operations were the subject of regulatory scrutiny in other regions, including the European Union.

The news comes after the FTC previously opened an inquiry into leading AI startups such as OpenAI, Amazon, Google and Microsoft in January as part of a broader effort to determine whether tech giants were exerting undue influence over the emerging technology industry. FTC Chair Lina Khan said at the time that the commission would examine “the investments and partnerships being formed between AI developers and major cloud service providers.”

What do you think?

First published at https://www.vogelitlaw.com/blog/what-do-you-about-think-about-ai-antitrust-investigations