reported that “Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday.”  The April 15, 2024 article entitled “Microsoft’s ‘AI Watchdog’ defends against new LLM jailbreak method” ( included these comments:

Microsoft first revealed the “Crescendo” LLM jailbreak method in a paper published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI’s ChatGPT, Google’s Gemini, Meta’s LLaMA or Anthropic’s Claude, to produce an output that would normally be filtered and refused by the model.

For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM’s previous outputs, follow up with questions about how they were made in the past.

The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns, and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called “Crescendomation,” which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success rate in convincing GPT-3.5, GPT-4, Gemini-Pro and LLaMA-2 70b to produce election-related misinformation and profanity-laced rants.
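The mechanism behind a Crescendo-style attack is ordinary multi-turn context: each new prompt can reference everything the model has already said. Here is a minimal, illustrative sketch of that conversation-history loop; the function names and message format are generic placeholders (with a benign echo stand-in for a real chat API), not Microsoft's tooling:

```python
# Illustrative sketch of multi-turn context accumulation, the mechanism
# Crescendo-style prompts exploit. Not Microsoft's code; echo_model is a
# benign stand-in for a real chat API.

def run_conversation(send_to_llm, prompts):
    """Send prompts in order, carrying the full history into each turn."""
    history = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = send_to_llm(history)  # the model sees all prior turns
        history.append({"role": "assistant", "content": reply})
    return history

def echo_model(history):
    # Stand-in "model" that echoes the latest prompt, to show the data flow.
    return f"(reply to: {history[-1]['content']})"

convo = run_conversation(echo_model, ["topic history?", "more detail?"])
assert len(convo) == 4  # two user turns, two assistant turns
```

Because each follow-up prompt inherits the model's own earlier output, per-prompt content filters that judge turns in isolation can miss the gradual escalation the researchers describe.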

Looks great, what do you think?

First published at reported that “A Department of Health and Human Services division that administers funding, training and other services to children and families is putting sensitive data at high risk because of gaps in cloud security controls and practices, according to a watchdog agency report.”  The April 2, 2024 article entitled “Poor Cloud Controls at HHS Put Families, Children at Risk” ( included these comments:

The HHS Office of Inspector General report released Monday says a 2022 audit and penetration testing of cybersecurity systems for HHS’ Administration for Children and Families found several deficiencies, including failing to accurately identify and inventory all of the division’s cloud computing assets.

This is terrible news! What do you think?

reported that “A $22 million ransom payment allegedly made by Optum, which is supported by blockchain transaction records associated with ALPHV/BlackCat, was apparently stolen by the ransomware-as-a-service (RaaS) group in an exit scam.”  The April 8, 2024 report entitled “Change Healthcare breach data may be in hands of new ransomware group” ( included this information:

The Change Healthcare breach story has taken on a new twist, with emerging ransomware group RansomHub claiming Monday it has 4TB of data stolen from the healthcare tech company in February.

The Change Healthcare platform, which is owned by UnitedHealth Group subsidiary Optum, was breached by an affiliate of the ALPHV/BlackCat ransomware group in February, causing widespread operational outages and threatening the leak of sensitive patient and client data.

The group reportedly published a fake law enforcement takedown notice on their leak site before disappearing with the full $22 million, leaving the affiliate who performed the breach, known as “notchy,” empty-handed.

I guess you can’t trust thieves!!

reported that “There’s a lot going on inside the minds of small and medium-sized business (SMB) owners… Increasingly, those opportunities exist in the cloud, whether it’s gaining new insights from data, effortlessly scaling to meet demand, or enabling collaboration from anywhere. But when it comes to cloud security,…”  The March 29, 2024 article entitled “Three cloud security misconceptions that hold SMBs back” ( included these three cloud security misconceptions:

 Misconception #1: Security costs too much money – and it’s not a priority initiative.

A recent study by AWS revealed that 35% of SMBs do not consider security a high-priority initiative. Sophisticated cyberthreats are not just a concern for enterprises, and cloud security gives SMBs access to the same infrastructure and tools used by organizations with the highest security needs—think healthcare, finance, and defense applications in the government. Beyond protection, sustaining revenue, earning customer trust, and maintaining a clear pathway to growth are all easier when companies invest in security.

Businesses of all sizes that use cloud security inherit all the security, controls, and certifications of their chosen provider’s infrastructure. So, SMB owners can meet their unique security and compliance requirements at scale—all while only paying for the resources they actually use. It’s also possible to bypass all the expenses associated with maintaining physical infrastructure, helping owners reinvest in other areas that drive savings and growth.

Misconception #2: Cloud apps are inherently less secure than on-premises.

About 50% of SMBs express concern about cloud security and migration. I understand. Many small companies are satisfied with on-premises, so why make a change? It’s a fair question. Remember that familiarity doesn’t equal safety. And cloud security offers flexibility and scalability that on-premises infrastructure cannot.

When companies store data in the cloud, the provider has responsibility for safeguarding the infrastructure. This lets the SMB owner focus on what’s actually being stored. And the right cloud partners can help select the services needed for that. If the company needs more computing capacity, it can access it without needing to purchase and maintain physical infrastructure, while the provider helps the organization stay compliant with industry regulations.

Misconception #3: Companies need a large IT team and extensive resources to maintain strong security.

Forty percent of SMBs report skill gaps as a barrier preventing them from investing in security, though 41% have yet to offer any security training to their staff. It’s likely another example of familiarity with on-premises infrastructure working against an organization. Managing security on-premises often involves more complexity and is more time-consuming than managing it in the cloud. But cloud security doesn’t require more budget for a company to succeed.

What do you think?

First published at reported “An active attack targeting a vulnerability in Ray, a widely used open-source AI framework, has impacted thousands of companies and servers running AI infrastructure — computing resources that were exposed to the attack through a critical vulnerability that’s under dispute and has no patch.”  The March 26, 2024 article entitled “Flaw in Ray AI framework potentially leaks sensitive data of workload” ( included these comments:

Oligo researchers said in a March 26 blog post that the bug lets attackers take over a company’s computing power and leak sensitive data. The flaw — CVE-2023-48022 — has been under active exploitation for the last seven months, affecting sectors such as education, cryptocurrency, and medical and video analytics companies.  

Here’s how the situation developed: Late last year, the researchers said, five unique vulnerabilities in Ray were disclosed to Anyscale, the unified compute platform company that develops and maintains Ray. The vulnerabilities were disclosed by Bishop Fox, Bryce Bearchell and Protect AI.

Following the disclosure, Anyscale posted a blog that addressed the vulnerabilities, clarified the chain of events, and detailed how each CVE was addressed. While four of the reported vulnerabilities were fixed in Ray version 2.8.1, the fifth CVE (CVE-2023-48022) remains disputed, meaning that Anyscale does not consider it a risk and has not addressed it with a patch.
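Because the disputed CVE hinges on Ray's unauthenticated API being reachable from untrusted networks, a quick reachability check is one way to triage exposure. A minimal sketch, assuming Ray's default dashboard port of 8265; this is an illustration, not an official scanner:

```python
# Minimal exposure-triage sketch (not an official tool): checks whether a
# host is listening on Ray's default dashboard port (8265). A port that is
# reachable from untrusted networks is worth investigating, since the
# disputed CVE-2023-48022 relies on that unauthenticated API being exposed.
import socket

def port_open(host: str, port: int = 8265, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "127.0.0.1"  # replace with the address an outsider would see
    print(f"Ray dashboard port open on {host}: {port_open(host)}")
```

A True result from an external vantage point does not prove the host runs Ray, only that something answers on that port; the point is that Ray is intended for controlled network environments, so nothing on its ports should be reachable from the open internet.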

How do you feel about Open Source AI?

reported that “Digital success requires a product-based approach to IT — and a shift to persistent rather than per-project funding. Here’s how to address your CFO’s concerns about costs and risks. CFOs want certainty when it comes to spend. And they want to know exactly how much return on investment (ROI) can be expected when IT leaders make technology-related changes.” The March 25, 2024 article entitled “How to get your CFO to buy into a better model for IT funding” ( included these comments:

Modern digital organisations tend to use an agile approach to delivery, with cross-functional teams, product-based operating models, and persistent funding. In contrast, traditional organisations use a project-based approach to delivery, with temporary teams created on an as-needed basis for a specific purpose with budgets based on up-front funding estimates.

CFOs have grown comfortable with the traditional project-based approach, through which they believe they get a better handle on spend certainty and a better sense of ROI. But to deliver transformative initiatives, CIOs need to embrace the agile, product-based approach, and that means convincing the CFO to switch to a persistent funding model.

Persistent funding, also known as perpetual funding, provides IT teams consistent funding on an annual rather than per-project basis. It empowers them to better consider long-term impact as well, enabling them to tackle technical debt and improve IT processes as necessary — activities often not addressed by project-based funding unless proposed separately.

What do you think about CIOs working with CFOs?

First published at reported that “Artificial intelligence technologies such as generative AI are not helping fraudsters create new and innovative types of scams. They are doing just fine relying on the traditional scams, but the advent of AI is helping them scale up attacks and snare more victims, according to fraud researchers at Visa.”  The March 21, 2024 article entitled “AI Is Making Payment Fraud Better, Faster and Easier” ( included these comments from Paul Fabara (Chief Risk and Client Services officer at Visa):

Organized threat actors continue to target the most vulnerable point in the payments ecosystem – humans. And they’re using AI to make their scams more convincing, leading to “unprecedented losses” for victims.

Also these comments were in the article:

Fraudsters can use AI to automate the process of identifying vulnerabilities in a system to make it easier for threat actors to launch targeted attacks, carry out large-scale social engineering attacks and generate convincing phishing emails on a massive scale by analyzing and mimicking human behavior. Generative AI tools also can generate realistic speech capable of mimicking human emotions and logic, which threat actors can exploit to impersonate financial institutions and obtain one-time passwords or execute phishing campaigns to steal payment account credentials.

AI deepfakes are a growing concern. Criminals recently used a deepfake video to impersonate company executives and trick an employee into transferring $25.6 million to several accounts held by the group. Researchers say hackers need just three seconds of audio sample to clone a voice using AI in 10 minutes. A month after that research became public, a Vice reporter demonstrated how an unauthorized person used a cloned voice to access a consumer’s bank account.

What can we do to help things get better, and not worse?

First published at reported that “Years into strategies centered on adopting cloud point solutions, CIOs increasingly find themselves facing a bill past due: rationalizing, managing, and integrating an ever-expanding lineup of SaaS offerings — many of which they themselves didn’t bring into the organization’s cloud estate.” The March 15, 2024 article entitled “CIOs take aim at SaaS sprawl” ( included these comments:

Salesforce, Workday, Atlassian, Oracle, Microsoft, GitHub, and ServiceNow are but a few of the many vendors whose cloud applications make up the new tech backbone for most enterprises today, in conjunction with customized in-house apps and niche offerings across public clouds. 

For many IT leaders, mergers and decentralization on top of cloud migration strategies are significant contributors to SaaS management headaches, resulting in increased complexity and redundancies that can be challenging to uncover.

But the article does not address the important legal issues in the SaaS world!

First published at reported that “Dallas-based UT Southwestern Medical Center had data from almost 2,100 individuals compromised following a data breach, The Dallas Morning News reports.”

The March 12, 2024 report entitled “UT Southwestern breach hits over 2K patients” ( included these comments from a UT Southwestern spokesperson:

We are assessing the data to prepare notifications to those impacted in accordance with federal regulations. The incident involved internal use of unapproved software and did not involve a cyberattack or external exposure of data,…

And these comments:

Threat actors were able to access patients’ medical and health insurance details, as well as their birthdates and addresses, noted UT Southwestern in a filing with the Office of the Texas Attorney General, which also noted upcoming notifications to affected individuals.

Such a breach comes months after UT Southwestern disclosed being among the more than 2,700 organizations impacted by the widespread MOVEit hack conducted by the Cl0p ransomware operation. The development also follows an IBM study noting mounting data breach costs, especially in the healthcare sector, which logged an over 50% increase in average breach costs since 2020, reaching $11 million last year.

Unfortunately Healthcare is a large target!

First published reported that “More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems, the lack of which has led to concerns about basic protections. The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models, the absence of which they said hampers safety measures that could help protect the public.” The March 5, 2024 article entitled “Researchers, legal experts want AI firms to open up for safety checks” included these comments:

The letter, and a study behind it, was created with the help of nearly two dozen professors and researchers who called for a legal “safe harbor” for independent evaluation of genAI products.

The letter was sent to companies including OpenAI, Anthropic, Google, Meta, and Midjourney, and asks them to allow researchers to investigate their products to ensure consumers are protected from bias, alleged copyright infringement, and non-consensual intimate imagery.

How do you think this GenAI investigation will go?

First published at