DHS issued a report stating that it “…has a longstanding commitment to responsible use of Artificial Intelligence (AI) technologies that we employ to advance our missions, including combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure. We are building on that commitment by publishing an updated AI Use Case Inventory and demonstrating how we are exceeding government-wide standards on transparency, accountability, and responsible use.” The December 16, 2024 release entitled “AI at DHS: A Deep Dive into our Use Case Inventory” (https://www.dhs.gov/news/2024/12/16/ai-dhs-deep-dive-our-use-case-inventory) included these comments:

DHS will continue to update the DHS AI Use Case Inventory on a rolling basis and complete at least one full update annually. We will continue to implement and monitor compliance with the M-24-10 minimum practices for every safety- and/or rights-impacting use case before and during its deployment. DHS will conduct ongoing monitoring, testing, and evaluation to make sure that we are living up to our commitments to use AI in safe, responsible, and trustworthy ways. 

What do you think?

First published https://www.vogelitlaw.com/blog/good-thing-that-the-department-of-homeland-security-is-studying-a1

Darkreading.com reported that “Today, however, many in security are simply “professionals” who found a well-paying job but lack that hacker spirit. They’re not driven by a love of the challenge or a hunger to learn. They may take the occasional course or learn a few technical tricks — but often, they’re doing the bare minimum. This leads to weak security. Meanwhile, attackers? They still have that old-school hacker passion, constantly learning and evolving for the love of the challenge.” The December 12, 2024 article entitled “Cultivating a Hacker Mindset in Cybersecurity Defense” (https://www.darkreading.com/cyberattacks-data-breaches/cultivating-hacker-mindset-cybersecurity-defense) included these comments:

Too many defenders get stuck on the “how” of an attack — the technical exploits, tools, and vulnerabilities — but to stay ahead, we need to ask “why.” Attackers aren’t just pushing buttons; they’re making strategic decisions, choosing the path of least resistance and maximum gain specific to their objectives.

Attackers know defenders are predictable. They know defenders — often too focused on what looks scary instead of what’s actually vulnerable — will patch the big vulnerabilities while ignoring the misconfigurations or overly trusted third-party integrations. Red teams might overlook these, but real adversaries know they’re prime opportunities. Attackers exploit trusted integrations to move laterally or exfiltrate data without triggering alarms. This is why understanding the “why” behind attacks is crucial. Attackers aren’t just targeting technology — they’re going after the path of least resistance, and too often, that’s where we’re late.

What do you think?

First published at https://www.vogelitlaw.com/blog/think-like-a-hacker-for-the-best-cybersecurity-defense


Networkworld.com reported that “IBM researchers are developing optical chips that are expected to significantly speed connectivity in data center systems, enabling better performance and lower energy costs for customers that need to build and support AI applications.” The December 9, 2024 article entitled “IBM optical chip prototype aims to speed AI computing and slash data center energy demands” (https://www.networkworld.com/article/3619912/ibm-optical-chip-prototype-aims-to-speed-ai-computing-and-slash-data-center-energy-demands.html) included these comments:

IBM said its co-packaged optics (CPO) prototype will redefine the way the computing industry transmits high-bandwidth data between chips, circuit boards, accelerators, servers and other devices by using a polymer material to direct optics rather than traditional glass-based fiber optics. 

Such so-called polymer waveguides promise lower costs and better flexibility and are less susceptible to interference than their glass counterparts, IBM said. There are a number of polymer fiber uses in the telecom arena, but few have been applied to the data center, IBM stated.

The CPO prototype module will offer high-bandwidth-density optical structures, coupled with the ability to transmit multiple wavelengths per optical channel, boosting bandwidth between chips as much as 80 times compared to traditional electrical connections, said Mukesh Khare, general manager of IBM’s semiconductors division and vice president of hybrid cloud research at IBM Research.

Sounds great for the IT community!

First published at https://www.vogelitlaw.com/blog/ibm-is-developing-new-ai-chips-with-co-packaged-optics-cpo


Computerworld.com reported that “Though the technology will likely lead to new jobs, they may not benefit those who lost work due to automation.” The December 4, 2024 article entitled “OECD: GenAI is affecting jobs previously thought safe from automation” (https://www.computerworld.com/article/3617281/oecd-genai-is-affecting-jobs-previously-thought-safe-from-automation.html) included these comments:

Generative AI (genAI) will soon affect work areas previously considered to have a low likelihood of automation, according to a new report from the Organization for Economic Co-operation and Development (OECD) covered by The Register.

Automation in the past mainly affected industrial jobs in rural areas. GenAI, on the other hand, can be used for non-routine cognitive tasks, which is expected to affect more highly skilled workers and big cities where these workers are often based. The report estimates that up to 70% of these workers will be able to get half of their tasks done twice as fast with the help of genAI. The industries likely to be affected include education, IT, and finance.

The OECD notes that even if work tasks disappear, unemployment won’t necessarily increase. The overall number of jobs could increase, but those new positions might not directly benefit those who lost work because of automation and new efficiencies.

Are you surprised?

First published at https://www.vogelitlaw.com/blog/genai-is-replacing-automation-jobs-previously-thought-safe


InfoWorld.com reported that “Even as Meta touts its Llama model, the company is incorporating OpenAI’s GPT-4 to enhance internal tools and philanthropic ventures. Mark Zuckerberg has consistently championed Meta’s Llama AI model as a leader in generative AI technology, positioning it as a strong competitor to OpenAI and Google. However, behind the scenes, Meta is complementing Llama with a rival AI model to meet its internal needs.” The December 4, 2024 article entitled “Meta quietly leans on rival GPT-4 despite Zuckerberg’s bold Llama claims” (https://www.infoworld.com/article/3617048/meta-quietly-leans-on-rival-gpt-4-despite-zuckerbergs-bold-llama-claims.html) included these comments:

The dual reliance on Llama and OpenAI raises questions about Meta’s broader AI ambitions. Zuckerberg has positioned Llama as a key player in what he calls the “model wars,” touting its open-source framework as a competitive advantage.

When Llama’s latest version was released mid-year, Zuckerberg stated it was “competitive with the most advanced models and leading in some areas.” He claimed Llama would be “the most advanced in the industry by next year.”

However, the integration of GPT-4 into key Meta tools suggests that Llama, while powerful, still has limitations, particularly in addressing diverse queries and providing robust support across various use cases.

What do you think?

First published at https://www.vogelitlaw.com/blog/no-big-surprise-that-gpt-4-and-llama-are-competing-big-time


CIO.com reported that “Artificial intelligence continues to dominate technology discussions, as executives — and workers at all levels — seek ways to use AI to make work easier, faster, and ultimately more profitable.” The December 2, 2024 article entitled “15 most underhyped technologies in IT” (https://www.cio.com/article/1246992/6-most-underhyped-technologies-in-it-plus-one-thats-not-dead-yet.html) included these comments about #13, Cloud computing:

Go back 15 years when cloud was the tech generating all the buzz, and analysts were trying to separate reality from the hype.

Today the model doesn’t seem like such a marvel, but when you think about it, cloud still deserves a lot of praise.

“It has been one of the most enabling technology shifts we’ve ever had, and because of the move to cloud, it enables us to do everything else we’re doing now. But it has gone completely to the background, because AI has sucked up all the air,” says Mark Taylor, CEO of the Society for Information Management (SIM).

Here are all 15 underhyped technologies in IT:

1. Small language models/small AI

2. AMRs and co-bots

3. IoT security

4. Zero trust edge

5. Quantum-safe technologies

6. Privacy-enhancing technologies (PET)

7. Decentralized digital identity (DDID)

8. Modern data platforms

9. Data management software

10. Synthetic data

11. Spatial computing

12. IT management software

13. Cloud computing

14. Cloud-based ERPs

15. Cloud migration tools

What do you think about these 15 underhyped technologies in IT?

First published at https://www.vogelitlaw.com/blog/cloud-computing-is-big-on-the-list-of-15-most-underhyped-technologies-in-it


CIO.com reported “Certifications have long been a great means for IT career advancement. The right credentials can boost your salary, set you apart from the competition, and help you land promotions in your current role. In fact, IT leaders report that certified staff add a value of $30,000 per year to the organization, with a noticeable increase in productivity from employees earning certifications, according to Skillsoft. In the coming year, certifications centered around cybersecurity and cloud are some of the highest-paying and most sought-after certifications available.” The November 6, 2024 report entitled “The 20 most valuable IT certifications today” (https://tinyurl.com/mry5nx3v) included these 20 IT certifications:

  1. AWS Certified Security – Specialty
  2. Google Cloud – Professional Cloud Architect
  3. Nutanix Certified Professional – Multicloud Infrastructure (NCP-MCI) v6.5
  4. Certified Cloud Security Professional (CCSP)
  5. Cisco Certified Network Professional (CCNP) – Security
  6. Certified Information Systems Security Professional (CISSP)
  7. Cisco Certified Internetwork Expert (CCIE) Enterprise Infrastructure
  8. Certified in Risk and Information Systems Control (CRISC)
  9. AWS Certified Developer – Associate
  10. Certified Information Privacy Professional (CIPP)
  11. Microsoft 365 Certified: Administrator Expert
  12. Certified Information Security Manager (CISM)
  13. Certified Information Privacy Manager (CIPM)
  14. AWS Certified Solutions Architect – Associate
  15. Certified Information Systems Auditor (CISA)
  16. Certified in the Governance of Enterprise IT (CGEIT)
  17. Microsoft Certified: Azure Administrator Associate
  18. Google Cloud – Associate Cloud Engineer
  19. Certified Ethical Hacker (CEH)
  20. Certified Data Privacy Solutions Engineer (CDPSE)

How does this impact you? And what do you think?

First published at https://www.vogelitlaw.com/blog/do-you-know-the-20-most-valuable-it-certifications


CIO.com reported “Transformational CIOs recognize the importance of IT culture in delivering innovation, accelerating business impacts, and reducing operational and security risks. Without a strong IT culture, inspiring IT teams to extend beyond their “run the business” responsibilities into areas requiring collaboration between business colleagues, data scientists, and partners is challenging.”  The November 12, 2024 article entitled “10 ways to kill your IT culture” (https://tinyurl.com/4zt5wstj) included the comments about “#2 Asking for IT’s opinions and ignoring their feedback”:

Even when the CIO isn’t micromanaging, IT employees can easily sense when leaders aren’t listening to their feedback and suggestions.

“One of my golden rules is, ‘Your opinion matters,’ but if you want to demotivate and demoralize your team, just ask for input and then consistently ignore it,” says Joe Puglisi, growth strategist and fractional CIO at 10Xnewco. “Before long, you will have a silent and frustrated group of employees.”

CIOs can avoid this culture-killing behavior by responding to opinions and feedback, even if it’s not immediate. Top leaders repeat what they heard and capture feedback in a management tool. This gives leaders time to review options, illustrate when feedback generates changes, and demonstrate that leaders care about people’s opinions.

Here are all 10 ways:

  1. Resorting to micromanagement or command-and-control
  2. Asking for IT’s opinions and ignoring their feedback
  3. Crushing hybrid work and work-life balance
  4. Lacking a pragmatic vision that inspires change
  5. Overcommitting and leaving teams defenseless
  6. Promoting agile culture without stakeholder adoption
  7. Failing to communicate pivots and strategic changes
  8. Amplifying technical jargon that undermines credibility
  9. Accepting team and individual underperformance
  10. Finger-pointing mistakes and taking all the credit

What do you think?

First published at https://www.vogelitlaw.com/blog/be-careful-to-avoid-any-of-these-10-ways-to-kill-your-it-culture


SCWorld.com reported that “A conflict has unfolded within the security operations center (SOC). For decades, security teams have balanced their financial needs and security needs to determine which data they should use and maintain to secure their organizations. However, as data volumes and storage costs continue to soar, this imperfect approach has led to one of the SOC’s biggest challenges: the data paradox.” The November 4, 2024 report entitled “Why SOCs need to break away from legacy SIEMs” (https://tinyurl.com/3jtxj2wf) included these comments:

Among the culprits responsible for the data paradox are the security information and event management (SIEM) tools that were originally designed two decades ago to centralize data from disparate tools so teams could use it to secure their businesses. However, these SIEM tools were built for a time when log volumes and adversary speed were a fraction of what they are today. They have failed to evolve and scale alongside the exponential growth of data volumes and changing adversary sophistication.

Imagine a team needs to investigate an incident and wants immediate access to all of the company’s data to gain a full picture of the incident and determine next steps. That’s now unattainable for many SOC teams because ingesting all of the necessary data for a full investigation is too time-consuming and costly when using legacy SIEM tools. SOC teams are forced to make budget-conscious choices on which data to analyze, leading to an incomplete picture, inadequate investigation and response, and insufficient protection against breaches.

A new generation of SIEM (Next-Gen SIEM) has emerged to help security teams scale and ingest every source of data they have without breaking the bank. These cloud-native tools are fundamentally changing how the SOC operates, allowing them to finally break free of the data paradox problem.

Think it’s about time for the Next-Gen SIEM?

First published at https://www.vogelitlaw.com/blog/aging-legacy-siems-need-to-be-replaced-with-next-gen-siems


SCWorld.com reported that “Artificial intelligence (AI) systems permeate almost every aspect of modern society. These technologies have deep integrations with business information systems that access valuable data such as customer details, financial reports, and healthcare records. AI systems can also access a variety of IT and OT systems such as cloud services, communication tools, IoT devices, and manufacturing processes.” The October 30, 2024 report entitled “Five ways to protect AI models” (https://tinyurl.com/6fntew7e) included these 5 ways to protect AI models from LLM attacks:

1. Embrace user awareness and education: Ensure that employees are well aware of AI risks and weaknesses. Train them well so they don’t fall victim to phishing attacks or upload sensitive company data into AI models for analysis.

2. Develop an AI usage policy: Define ethical and responsible usage policy for AI within the organization. Offer clear instructions on what the company permits and does not permit. Identify and communicate risks associated with AI, such as data privacy, bias and misuse.

3. Leverage AI model and infrastructure security: Deploy advanced security tools to protect the AI infrastructure from DDoS and other cybersecurity threats. Use zero-trust principles and strict access controls. Limit access to AI models to specific privileged users.

4. Validate and sanitize inputs: Validate and sanitize all inputs before they are passed to the LLM for processing. This step protects against major prompt injection attacks and ensures that the model is fed clean data.

5. Practice anonymization and minimization of data: Use masking or encryption techniques to anonymize data while training AI models. Minimize data use by only using data necessary for the company’s specific application.
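Steps 4 and 5 above can be sketched in code. The following is a minimal Python illustration, not a production control: the injection patterns, the PII regexes, and the `sanitize_prompt` function are all hypothetical examples I am assuming for illustration (a real deployment would rely on a maintained denylist or a dedicated classifier, and far more thorough PII detection).

```python
import re

# Hypothetical injection phrases for illustration only; real systems
# would use a maintained denylist or a trained classifier.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
]

# Simple example PII patterns (email addresses, US SSN-shaped numbers).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def sanitize_prompt(user_input: str, max_len: int = 2000) -> str:
    """Validate user input, reject likely injections, and mask obvious PII
    before the text is passed to an LLM (steps 4 and 5)."""
    text = user_input.strip()
    if not text:
        raise ValueError("empty input")
    if len(text) > max_len:
        raise ValueError("input exceeds maximum length")
    lowered = text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("possible prompt injection detected")
    # Step 5: mask PII so it never reaches the model in the clear.
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text
```

The design choice here is fail-closed: suspicious input raises an error rather than being silently rewritten, while PII is masked in place so legitimate prompts still go through.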

Good advice, what do you think?

First published at https://www.vogelitlaw.com/blog/nbspyou-need-to-consider-these-5-ways-to-protect-ai-models