Why patching SLAs should be the floor, not the strategy

Source: CSO Online

Author: unknown

URL: https://www.csoonline.com/article/4169623/why-patching-slas-should-be-the-floor-not-the-strategy.html

ONE SENTENCE SUMMARY:

Patching SLAs create compliance theater by rewarding easy fixes, while true cyber risk persists in hard-to-remediate legacy, architecture, and control gaps.

MAIN POINTS:

  1. CISOs often recite green SLA metrics while significant unresolved vulnerabilities remain.
  2. Quickly closed criticals are typically inexpensive, low-friction remediation tasks.
  3. Difficult issues linger: legacy systems, architectural flaws, identity misconfigurations, and unsupported platforms.
  4. Governance and reporting overemphasize SLA compliance, masking concentrated high-impact exposures.
  5. SLA performance indicates ticketing discipline, not actual security risk reduction.
  6. Fire-drill analogy: repeated success doesn’t prove resilience against unscripted incidents.
  7. Boards can be misled when the riskiest failures live inside the “small” noncompliant percentage.
  8. Expressing cyber risk in dollar terms changes executive prioritization and funding discussions.
  9. Exception processes often become paperwork, letting exposure disappear from dashboards without mitigation.
  10. Meaningful remediation needs capital/opex investment justified by quantified risk reduction.

TAKEAWAYS:

  1. Reframe SLAs as minimum hygiene requirements, not primary vulnerability program success metrics.
  2. Prioritize trending quantified residual risk by business service over raw closure percentages.
  3. Require risk acceptances to include loss exposure, review cadence, and funded remediation plans.
  4. Use attacker-speed evidence (e.g., DBIR, KEV) to challenge long patch timelines for hard changes.
  5. Accept imprecision in CRQ estimates because actionable dollars beat misleading green scorecards.
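
The dollar-first framing in these takeaways can be made concrete with a toy cyber risk quantification sketch; the frequencies, loss magnitudes, and remediation cost below are hypothetical, and the annualized loss expectancy (ALE = frequency × single-loss magnitude) is the standard simplification:

```python
# Toy CRQ sketch: compare annualized loss exposure (ALE) before and
# after a remediation, using hypothetical inputs.
# ALE = annual event frequency x expected single-loss magnitude.

def ale(annual_frequency: float, single_loss: float) -> float:
    """Annualized loss expectancy in dollars."""
    return annual_frequency * single_loss

def risk_reduction(before: float, after: float, cost: float) -> dict:
    """Express a remediation as quantified risk reduction vs. its cost."""
    reduction = before - after
    return {
        "reduction_usd": reduction,
        "cost_usd": cost,
        "roi": (reduction - cost) / cost,  # simple return on mitigation spend
    }

# Hypothetical legacy-system exposure: 0.3 events/year at $2M each.
before = ale(0.3, 2_000_000)   # ~$600,000/year
# After a funded architectural fix: 0.05 events/year at $1M each.
after = ale(0.05, 1_000_000)   # ~$50,000/year
summary = risk_reduction(before, after, cost=150_000)
```

Even with wide error bars on the inputs, the output is a funding argument in dollars rather than a green/red SLA tile.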

Your Purple Team Isn’t Purple — It’s Just Red and Blue in the Same Room

Source: The Hacker News

Author: info@thehackernews.com (The Hacker News)

URL: https://thehackernews.com/2026/05/your-purple-team-isnt-purple-its-just.html

ONE SENTENCE SUMMARY:

Autonomous purple teaming uses AI agents to close red-blue validation loops at machine speed, outpacing shrinking exploit windows.

MAIN POINTS:

  1. Night-shift defense suffers from manual handoffs like copying hashes, rewriting scripts, and awaiting approvals.
  2. Exploit availability time dropped from 56 days in 2024 to roughly 10 hours.
  3. Defender processes improved to hours, but attacker operations now execute in seconds.
  4. Purple teaming aims to iteratively convert red findings into blue validations continuously.
  5. Traditional execution fails because human coordination introduces meetings, delays, and missed communications.
  6. Tool outputs become artifacts that require reinterpretation, creating fragile “spaghetti” workflows between teams.
  7. Approval and ticketing cycles often exceed exploitation windows, making fixes arrive too late.
  8. AI-assisted adversaries can compromise systems in about 73 seconds, widening operational asymmetry.
  9. Autonomous purple teaming replaces handoffs with auditable agents running end-to-end iterative loops.
  10. Effective autonomy combines automated pentesting, BAS validation, and AI-driven mobilization into one queue.

TAKEAWAYS:

  1. Speed gaps are primarily workflow problems, not analyst competence or tool capability.
  2. Exploit windows now demand validation and remediation cycles measured in minutes, not days.
  3. Operationalizing purple teaming requires eliminating manual knowledge-transfer bottlenecks.
  4. End-to-end autonomous loops must remain transparent, controllable, and reversible for defenders.
  5. Unified action queues based on real exploitability beat CVSS-based prioritization for timely defense.
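
The speed gap described above reduces to simple arithmetic; the 10-hour exploit-availability window and the 73-second AI-assisted compromise come from the article, while the three-day defender ticket/approval cycle is an illustrative assumption:

```python
# Compare attacker timelines from the article against an assumed
# defender validate-and-remediate cycle, all in seconds.

EXPLOIT_AVAILABLE = 10 * 3600    # ~10 hours until a working exploit exists
AI_COMPROMISE = 73               # ~73 seconds for an AI-assisted compromise
DEFENDER_CYCLE = 3 * 24 * 3600   # assumption: a 3-day ticket/approval loop

def speed_gap(attacker_s: int, defender_s: int) -> float:
    """How many attacker cycles fit inside one defender cycle."""
    return defender_s / attacker_s

gap_vs_exploit = speed_gap(EXPLOIT_AVAILABLE, DEFENDER_CYCLE)
gap_vs_ai = speed_gap(AI_COMPROMISE, DEFENDER_CYCLE)
```

Against exploit availability the assumed loop is ~7x too slow; against AI-assisted tradecraft it is more than three thousand times too slow, which is the asymmetry autonomous loops aim to close.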

Why Changing Passwords Doesn’t End an Active Directory Breach

Source: BleepingComputer

Author: Sponsored by Specops Software

URL: https://www.bleepingcomputer.com/news/security/why-changing-passwords-doesnt-end-an-active-directory-breach/

ONE SENTENCE SUMMARY:

Password resets alone may not evict attackers from AD and hybrid Entra ID environments, because cached credentials, sync delays, Kerberos tickets, live sessions, and retained permissions keep old access working.

MAIN POINTS:

  1. Changing a password doesn’t instantly invalidate old credentials across all authentication paths.
  2. Windows cached password hashes can allow offline logon using pre-reset credentials.
  3. Hybrid setups add Entra ID synchronization delays where old passwords may still work.
  4. Post-reset states vary depending on device reconnection and successful new logons.
  5. Pass-the-hash attacks reuse captured hashes even after passwords are changed.
  6. Kerberos tickets keep sessions alive without re-entering passwords after resets.
  7. Service accounts’ long-lived, privileged credentials provide resilient attacker fallback access.
  8. Golden and Silver Ticket attacks bypass password checks by forging Kerberos tickets.
  9. ACL abuse and AdminSDHolder modifications can persist privileges despite password changes.
  10. Effective eviction needs session termination, ticket purging, KRBTGT resets, rotations, and directory auditing.

TAKEAWAYS:

  1. Treat password resets as one control within broader incident response, not final remediation.
  2. Reduce reset-gap exposure by forcing sync and updating endpoint cached credentials.
  3. Kick attackers out by terminating sessions and clearing Kerberos tickets on affected systems.
  4. Rotate privileged and service-account credentials to remove reliable persistence mechanisms.
  5. Audit AD changes—memberships, delegated rights, ACLs, privileged roles—to eliminate hidden backdoors.
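
Point 6 above (tickets outliving resets) can be sketched as a timeline check; the 10-hour lifetime is the Windows default for user TGTs, and the timestamps are hypothetical:

```python
# Sketch of why a password reset alone doesn't cut Kerberos access:
# a TGT issued before the reset stays valid until its lifetime expires
# (10 hours is the Windows default), regardless of the reset.
from datetime import datetime, timedelta

TGT_LIFETIME = timedelta(hours=10)  # default "Maximum lifetime for user ticket"

def ticket_valid_after_reset(tgt_issued: datetime,
                             password_reset: datetime,
                             now: datetime) -> bool:
    """A pre-reset ticket is still honored until it expires on its own."""
    expires = tgt_issued + TGT_LIFETIME
    return tgt_issued < password_reset and now < expires

issued = datetime(2026, 5, 8, 8, 0)   # attacker obtained a TGT at 08:00
reset = datetime(2026, 5, 8, 9, 0)    # defender reset the password at 09:00
still_in = ticket_valid_after_reset(issued, reset, datetime(2026, 5, 8, 14, 0))
# five hours post-reset, the pre-reset ticket still has ~4 hours left
```

This is why eviction pairs resets with explicit ticket purging and session termination rather than waiting out the clock.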

TeamPCP Compromises Checkmarx Jenkins AST Plugin Weeks After KICS Supply Chain Attack

Source: The Hacker News

Author: info@thehackernews.com (The Hacker News)

URL: https://thehackernews.com/2026/05/teampcp-compromises-checkmarx-jenkins.html

ONE SENTENCE SUMMARY:

Checkmarx confirmed a tampered Jenkins AST plugin publication, linked to TeamPCP, highlighting repeated supply-chain compromises and likely incomplete remediation.

MAIN POINTS:

  1. Checkmarx acknowledged a modified Jenkins AST plugin appeared in the Jenkins Marketplace.
  2. Users were told to keep versions 2.0.13-829.vc72453fa_1c16 or earlier.
  3. Checkmarx released version 2.0.13-848.v76e89de8a_053 on GitHub and Marketplace.
  4. Incident updates still suggested a new plugin version was being published.
  5. The company did not explain how the malicious version reached the Marketplace.
  6. TeamPCP was identified as the attacker targeting Checkmarx again.
  7. Earlier compromises included KICS Docker image, VS Code extensions, and GitHub Actions workflow.
  8. Bitwarden CLI npm package was briefly compromised to distribute credential-stealing malware.
  9. Researchers reported unauthorized access to the plugin’s GitHub repo and defacement/renaming.
  10. SOCRadar inferred unrotated credentials or an undetected foothold enabled rapid re-entry.

TAKEAWAYS:

  1. Verify Jenkins plugin versions immediately and roll back if beyond the known-safe build.
  2. Supply-chain trust is being exploited to distribute credential stealers through developer tooling.
  3. Secret rotation and credential hygiene appear central to preventing repeated intrusions.
  4. Monitor code repositories for defacement, renames, and unauthorized administrative actions.
  5. Treat rapid repeat incidents as evidence of incomplete remediation or persistent access.
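
Takeaway 1 can be sketched as a version gate; the known-safe ceiling (build 829) and fixed release (build 848) come from the article, while the parsing of the Jenkins "<version>-<build>.v<hash>" suffix is an assumption about the scheme:

```python
# Sketch: decide whether an installed Checkmarx AST plugin build is
# trusted, given the advisory's known-safe ceiling and the later
# clean release. The suffix parse assumes Jenkins' incremental
# "<version>-<build>.v<hash>" naming.
import re

KNOWN_SAFE_BUILD = 829                     # "keep 2.0.13-829... or earlier"
FIXED_BUILD = "2.0.13-848.v76e89de8a_053"  # clean release published later

def build_number(version: str) -> int:
    """Extract the incremental build number, e.g. 829 from 2.0.13-829.vc72..."""
    m = re.match(r"[\d.]+-(\d+)\.v", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version}")
    return int(m.group(1))

def is_trusted(installed: str) -> bool:
    """Trusted if at/below the known-safe ceiling or exactly the fixed release."""
    return installed == FIXED_BUILD or build_number(installed) <= KNOWN_SAFE_BUILD
```

Anything between the ceiling and the fixed release is exactly the window the tampered publication occupied, so it gets flagged for rollback.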

Inside AD CS Escalation: Unpacking Advanced Misuse Techniques and Tools

Source: Unit 42

Author: Stav Setty, Tom Fakterman and Shachar Roitman

URL: https://origin-unit42.paloaltonetworks.com/active-directory-certificate-services-exploitation/

ONE SENTENCE SUMMARY:

AD CS misconfigurations enable stealthy certificate-based privilege escalation and persistence, detectable through correlated telemetry, behavioral analytics, and targeted Windows event monitoring.

MAIN POINTS:

  1. AD CS underpins PKI authentication and encryption but often ships with insecure defaults.
  2. Misconfigured certificate templates can grant unintended, long-lived privileged authentication capabilities.
  3. Adversaries exploit native issuance workflows rather than zero-days or malware.
  4. Under-monitoring and configuration complexity create persistent blind spots for defenders.
  5. Attack lifecycle spans initial access, discovery, exploitation, escalation, lateral movement, and persistence.
  6. ESC1 abuses templates allowing low-privileged enrollment with SAN control and auth EKUs.
  7. Shadow credentials persist by adding attacker keys to msDS-KeyCredentialLink for passwordless access.
  8. PKINIT enables Kerberos ticket requests using certificates, facilitating impersonation and lateral movement.
  9. Tools like Certify, Certipy, Whisker, and PKINITtools industrialize AD CS exploitation.
  10. Detection requires correlating certificate events, LDAP reconnaissance, directory changes, and Kerberos activity.

TAKEAWAYS:

  1. Harden templates by removing broad enrollment rights and disabling ENROLLEE_SUPPLIES_SUBJECT where unnecessary.
  2. Investigate mismatches between requester identity and issued certificate subject as strong abuse indicators.
  3. Monitor Event IDs 4886/4887/4898/5136/4768/4769 plus LDAP client/server query logs.
  4. Treat unusual LDAP enumeration of pKICertificateTemplate and msDS-KeyCredentialLink as early warning.
  5. Combine posture management with behavior-based detection to catch stealthy, certificate-driven persistence.
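
The requester/subject mismatch check in takeaway 2 can be sketched over parsed issuance events; the dict fields below are simplified stand-ins, not the real Windows Event ID 4887 schema:

```python
# Sketch: flag certificate issuance where the requester differs from
# the certificate subject on an authentication-capable template (a
# classic ESC1 indicator). Event dicts are simplified stand-ins for
# parsed issuance records, not the actual Windows event schema.

def is_suspicious(event: dict) -> bool:
    """Requester != subject on a template with an authentication EKU."""
    requester = event["requester"].lower()
    subject = event["subject"].lower()
    return event["auth_eku"] and requester != subject

events = [
    {"requester": "corp\\alice", "subject": "corp\\alice", "auth_eku": True},
    # a low-privileged account requesting a cert for a domain admin:
    {"requester": "corp\\svc-web", "subject": "corp\\administrator", "auth_eku": True},
]
alerts = [e for e in events if is_suspicious(e)]
```

In practice this correlation runs over Event ID 4886/4887 telemetry; the point is that the mismatch itself, not any malware artifact, is the detection.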

Active attack: Dirty Frag Linux vulnerability expands post-compromise risk

Source: Microsoft Security Blog

Author: Microsoft Defender Security Research Team

URL: https://www.microsoft.com/en-us/security/blog/2026/05/08/active-attack-dirty-frag-linux-vulnerability-expands-post-compromise-risk/

ONE SENTENCE SUMMARY:

Dirty Frag is a Linux local privilege escalation exploiting esp4/esp6 and rxrpc kernel components, enabling reliable root escalation post-compromise.

MAIN POINTS:

  1. Newly disclosed LPE “Dirty Frag” targets Linux kernel networking and memory-fragment handling.
  2. Affected components include esp4, esp6 (CVE-2026-43284), and rxrpc (CVE-2026-43500).
  3. Public PoCs suggest higher reliability than timing-sensitive race-condition Linux escalation techniques.
  4. Attacks typically follow initial access via SSH, web-shells, container escape, or low-privileged accounts.
  5. Impacted ecosystems include Ubuntu, RHEL, CentOS Stream, AlmaLinux, Fedora, openSUSE, and OpenShift.
  6. Microsoft Defender is monitoring related activity and developing detections and protections.
  7. Root access enables disabling security tools, credential theft, log tampering, lateral movement, and persistence.
  8. Multiple kernel attack paths improve consistency across vulnerable environments.
  9. Exploit behavior resembles CopyFail (CVE-2026-31431) via page cache manipulation, with added paths.
  10. Exposure increases where IPsec/VPN and xfrm-related functionality keeps vulnerable modules enabled.

TAKEAWAYS:

  1. Treat any foothold on vulnerable Linux hosts as potentially becoming root quickly.
  2. Reduce attack surface by disabling unused rxrpc and, if feasible, esp/xfrm functionality.
  3. Limit unnecessary local shell availability and harden container boundaries to slow post-compromise escalation.
  4. Monitor aggressively for anomalous privilege changes and kernel-module load/unload activity.
  5. Prepare rapid kernel patch deployment once vendor advisories and fixed builds are available.
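
The module-surface check behind takeaway 2 can be sketched by parsing a /proc/modules-style listing; the sample text is invented, and a real check would read the host's actual /proc/modules:

```python
# Sketch: check a /proc/modules-style listing for the kernel modules
# Dirty Frag touches (esp4, esp6, rxrpc). The sample listing is made
# up; on a real host you would read /proc/modules itself.

VULNERABLE_MODULES = {"esp4", "esp6", "rxrpc"}

def loaded_vulnerable(proc_modules_text: str) -> set:
    """Return which of the at-risk modules appear in the listing."""
    loaded = {line.split()[0]
              for line in proc_modules_text.splitlines() if line.strip()}
    return loaded & VULNERABLE_MODULES

sample = """\
esp4 24576 0 - Live 0x0000000000000000
xfrm_algo 16384 1 esp4, Live 0x0000000000000000
overlay 147456 1 - Live 0x0000000000000000
"""
at_risk = loaded_vulnerable(sample)   # esp4 is loaded on this sample host
```

Hosts where the set comes back empty have materially less exposure until patched kernels ship.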

Day Zero Readiness: The Operational Gaps That Break Incident Response

Source: The Hacker News

Author: info@thehackernews.com (The Hacker News)

URL: https://thehackernews.com/2026/05/day-zero-readiness-operational-gaps.html

ONE SENTENCE SUMMARY:

Incident response readiness requires pre-provisioned access, tested workflows, clear authority, resilient communications, and adequate logging to act immediately.

MAIN POINTS:

  1. Retainers ensure availability, but operational readiness enables immediate, meaningful incident work.
  2. Early response delays increase attacker dwell time, impact breadth, and recovery costs.
  3. Paper plans don’t equal readiness; speed depends on practiced, executable procedures.
  4. Day Zero priorities are visibility first, then authority for containment actions.
  5. Identity access is most urgent to map blast radius and compromised credentials.
  6. Cloud/SaaS visibility must be immediate because audit telemetry can be ephemeral.
  7. EDR investigator access enables fast host-wide querying and reliable containment decisions.
  8. Centralized logging needs sufficient retention; ninety days minimum supports reconstruction.
  9. Breach conditions require out-of-band communications and a designated incident manager.
  10. Pre-approved access policies must specify triggers, roles, approvals, time-boxing, and revocation.

TAKEAWAYS:

  1. Pre-create dormant IR accounts with MFA across IdP, cloud, EDR, and SIEM.
  2. Eliminate Day Zero legal/procurement friction through pre-cleared external responder access.
  3. Test activation end-to-end via tabletop exercises, timing visibility and containment steps.
  4. Ensure backups are isolated and restorations are validated against attacker reach.
  5. Maintain asset inventory and network maps to reduce investigative blind spots.
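
The time-boxed access policy in point 10 can be sketched as a grant that expires on its own; the account name and the 72-hour window are illustrative assumptions:

```python
# Sketch: time-boxed activation for a dormant IR account, matching the
# "triggers, roles, approvals, time-boxing, revocation" policy shape.
# The 72-hour window and account name are illustrative assumptions.
from datetime import datetime, timedelta

TIME_BOX = timedelta(hours=72)

class IRGrant:
    def __init__(self, account: str, activated: datetime):
        self.account = account
        self.activated = activated

    def active(self, now: datetime) -> bool:
        """Grant is live only inside its approved window."""
        return self.activated <= now < self.activated + TIME_BOX

grant = IRGrant("ir-responder-01", datetime(2026, 5, 8, 2, 0))
during = grant.active(datetime(2026, 5, 9, 2, 0))   # 24h in: still live
after = grant.active(datetime(2026, 5, 11, 3, 0))   # past 72h: revoked
```

Encoding the window in tooling, not in a runbook paragraph, is what makes revocation happen without a Day Zero decision.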

Palo Alto PAN-OS Flaw Under Active Exploitation Enables Remote Code Execution

Source: The Hacker News

Author: info@thehackernews.com (The Hacker News)

URL: https://thehackernews.com/2026/05/palo-alto-pan-os-flaw-under-active.html

ONE SENTENCE SUMMARY:

Palo Alto Networks warns CVE-2026-0300 enables unauthenticated root RCE via the PAN-OS Captive Portal, already exploited and unpatched until May 13, 2026.

MAIN POINTS:

  1. Palo Alto Networks issued an advisory for a critical PAN-OS buffer overflow vulnerability.
  2. CVE-2026-0300 allows unauthenticated remote code execution with root privileges.
  3. Exploitation occurs through specially crafted packets targeting the User-ID Authentication Portal.
  4. CVSS is 9.3 when the portal is accessible from the internet or untrusted networks.
  5. Severity drops to 8.7 if access is restricted to trusted internal IPs.
  6. Palo Alto observed limited in-the-wild exploitation against publicly exposed portals.
  7. Affected platforms include PA-Series and VM-Series firewalls using the portal.
  8. Impacted PAN-OS branches span 10.2, 11.1, 11.2, and 12.1 before the listed fixed builds.
  9. No patch is currently available; fixes are planned starting May 13, 2026.
  10. Recommended mitigations are restricting portal access to trusted zones or disabling it.

TAKEAWAYS:

  1. Internet-exposed Captive Portal configurations materially increase risk of full device compromise.
  2. Unauthenticated root-level RCE demands immediate defensive configuration changes, not waiting for patches.
  3. Validate whether the User-ID Authentication Portal is enabled across PA/VM fleets and identify exposures.
  4. Prioritize upgrading to upcoming fixed releases once available across all impacted PAN-OS versions.
  5. Enforcing least-exposure best practices for management and sensitive portals reduces exploitability significantly.

Before the Breach, There Was a Test Environment

Source: Qualys Security Blog

Author: Amit Patil

URL: https://blog.qualys.com/qualys-insights/2026/05/06/before-the-breach-there-was-a-test-environment-qa-cloud-security

ONE SENTENCE SUMMARY:

Cloud risk often originates in QA environments, where temporary infrastructure, misconfigurations, and excessive entitlements persist, requiring integrated security controls.

MAIN POINTS:

  1. Breaches surface in production, but the enabling decisions typically occur earlier in QA.
  2. Temporary test infrastructure frequently becomes permanent, creating shadow assets and exposure.
  3. Internet-facing QA tools like Jenkins attract attackers because they look non-eventful.
  4. QA teams now shape enterprise security via provisioning, CI/CD, and automation frameworks.
  5. Cloud accelerates template reuse, causing risky configurations to propagate across environments.
  6. The four primary QA risk areas are configuration, identity, workloads, and Infrastructure as Code.
  7. CSPM reduces exposure by enforcing benchmarks and detecting drifted or insecure configurations.
  8. CIEM reveals entitlement sprawl where deployment privileges quietly become lasting permissions.
  9. CWP finds vulnerable dependencies, exposed secrets, and runtime compromise within test workloads.
  10. Combined prevention and detection improve outcomes through IaC security and behavioral CDR monitoring.

TAKEAWAYS:

  1. Treat QA as a strategic security control point, not a lower-risk “non-production” zone.
  2. Eliminate public exposure and weak access controls in test infrastructure before attackers find them.
  3. Enforce least privilege for pipelines and service accounts to minimize blast radius.
  4. Scan containers and automation dependencies continuously as production-grade workloads.
  5. Unify posture, entitlement, workload, IaC, and runtime signals to prioritize true business risk.

Microsoft named an overall leader in KuppingerCole Analyst’s 2026 Emerging AI Security Operations Center (SOC) report

Source: Microsoft Security Blog

Author: Rob Lefferts

URL: https://www.microsoft.com/en-us/security/blog/2026/05/06/microsoft-named-an-overall-leader-in-kuppingercole-analysts-2026-emerging-ai-security-operations-center-soc-report/

ONE SENTENCE SUMMARY:

SOC automation is shifting from playbooks to agentic, context-aware AI that augments analysts, prioritizes incidents, and speeds response.

MAIN POINTS:

  1. Security operations effectiveness now hinges on converting context into scalable action.
  2. KuppingerCole’s 2026 AI SOC report emphasizes intelligence-driven automation across the lifecycle.
  3. Human capacity, not alert volume, is the primary SOC constraint.
  4. Microsoft is named Overall Leader and Market Leader in the AI SOC market.
  5. Legacy SOAR automated predictable tasks via static rules and predefined workflows.
  6. Analysts still waste time correlating alerts, triaging benign incidents, and repeating investigations.
  7. Built-in automation uses ML, LLMs, and agents to streamline analyst workflows.
  8. Automatic attack disruption limits lateral movement while keeping teams in control.
  9. The phishing triage agent evaluates semantics, URLs, files, and intent to reduce false positives.
  10. Agentic SOC investments enable reasoning, summarization, correlation, and actions with human oversight.

TAKEAWAYS:

  1. Prioritize platforms that embed automation directly into analyst experiences, not as add-ons.
  2. Favor adaptive automation that handles novel threats beyond deterministic playbooks.
  3. Use ML-based prioritization to focus analysts on highest-impact incidents first.
  4. Deploy agent-assisted triage and disruption to reduce dwell time and operational burnout.
  5. Ensure agentic actions include confidence thresholds and governance for human-controlled response.

Insights into the clustering and reuse of phone numbers in scam emails

Source: Cisco Talos Blog

Author: Omid Mirzaei

URL: https://blog.talosintelligence.com/insights-into-the-clustering-and-reuse-of-phone-numbers-in-scam-emails/

ONE SENTENCE SUMMARY:

Talos analyzes scam-email phone-number IOCs, revealing VoIP-driven reuse, rotation, clustering, and defenses to expose call-center infrastructure across brands and lures.

MAIN POINTS:

  1. Cisco Talos now tracks phone numbers in emails as additional IOCs.
  2. TOAD scams move victims from email to calls for coercion and malware.
  3. VoIP dominates campaigns because APIs enable cheap, scalable, hard-to-trace provisioning.
  4. Providers split into wholesalers, retailers, CPaaS, and UCaaS, with CPaaS most abused.
  5. Sinch appeared most commonly abused; Verizon and NUSO were least abused in the study.
  6. Analysis found 1,652 unique numbers; 57 were reused on consecutive days.
  7. Typical reuse spans two days; the maximum observed consecutive reuse lasted four days.
  8. Cool-down gaps extend operational continuity; median number lifespan measured about 14 days.
  9. Recycling numbers across brands, subjects, PDFs, HEIC, and JPEG lures increases reach and bypasses filters.
  10. Sequential DID blocks and clustering by shared numbers reveal organized call-center infrastructure.

TAKEAWAYS:

  1. Shift investigations toward phone-number intelligence to anchor and connect otherwise ephemeral campaigns.
  2. Build block-level correlation to surface sequential DID allocation patterns and shared scam infrastructure.
  3. Coordinate with CPaaS/VoIP providers to disrupt API-driven provisioning pipelines used by attackers.
  4. Tune detections for rotation and cool-down behavior instead of relying solely on sender reputation.
  5. Combine NLP-driven email analysis with attachment-format inspection to catch diverse TOAD lures.
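
The shared-number clustering described above can be sketched as a simple merge over campaigns that reuse any phone number; the campaign names and numbers are invented sample data:

```python
# Sketch: cluster scam-email campaigns that share phone numbers, the
# core of the block-level correlation Talos describes. Campaign names
# and numbers below are invented sample data.

campaigns = {
    "geek-squad-pdf": {"+18005550101", "+18005550102"},
    "norton-renewal": {"+18005550102", "+18005550199"},  # shares ...0102
    "mcafee-heic": {"+18885550404"},
}

def cluster_by_shared_numbers(campaigns: dict) -> list:
    """Merge campaigns into clusters whenever they share any number."""
    clusters = []  # list of (campaign-name set, phone-number set)
    for name, numbers in campaigns.items():
        merged, nums = {name}, set(numbers)
        remaining = []
        for cset, cnums in clusters:
            if cnums & nums:            # shared infrastructure: merge
                merged |= cset
                nums |= cnums
            else:
                remaining.append((cset, cnums))
        clusters = remaining + [(merged, nums)]
    return [sorted(cset) for cset, _ in clusters]

groups = cluster_by_shared_numbers(campaigns)
# the two brands sharing a number collapse into one call-center cluster
```

Extending the join key from exact numbers to sequential DID blocks is what surfaces the bulk-provisioned infrastructure the article highlights.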

Why most zero-trust architectures fail at the traffic layer

Source: CSO Online

Author: unknown

URL: https://www.csoonline.com/article/4166689/why-most-zero-trust-architectures-fail-at-the-traffic-layer-2.html

ONE SENTENCE SUMMARY:

Zero trust often fails because identity policies are strong while traffic-layer enforcement of ingress, TLS, mTLS, validation, and visibility is inconsistent.

MAIN POINTS:

  1. Many enterprises adopt zero trust with heavy investment in identity and policy tooling.
  2. Incident investigations reveal uncertainty about how malicious traffic entered despite controls.
  3. Implementations overemphasize identity verification while undersecuring traffic entry and movement.
  4. Traffic-layer components include ingress paths, load balancers, gateways, TLS, and service communication.
  5. Inconsistent ownership across network, security, and application teams creates enforcement gaps.
  6. Permissive edges persist, including outdated TLS versions and weak cipher configurations.
  7. Fragmented ingress via CDNs, load balancers, legacy endpoints, and APIs causes inconsistent behavior.
  8. Partial mutual TLS deployments terminate and re-establish connections with weaker internal assumptions.
  9. East-west traffic is frequently treated as trusted once inside the environment.
  10. Limited telemetry prevents teams from tracing request paths during incident response.

TAKEAWAYS:

  1. Treat traffic handling as the practical enforcement point for zero-trust security.
  2. Standardizing ingress reduces bypasses created by multiple inconsistent entry paths.
  3. Enforcing strict TLS baselines at the edge closes common, avoidable exposure.
  4. End-to-end mTLS and request normalization strengthen continuous trust validation.
  5. Consistent telemetry enables effective incident response by tracing requests across the environment.
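
The strict TLS baseline the article calls for can be sketched with Python's ssl module; this is a client-side illustration of the same floor (TLS 1.2+, verified certificates) that edge load balancers and gateways should enforce:

```python
# Sketch: a TLS baseline expressed with Python's ssl module. The same
# floor (TLS 1.2 minimum, certificate verification on) is what the
# article argues edge devices should enforce consistently.
import ssl

def baseline_context() -> ssl.SSLContext:
    """Client context refusing anything older than TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no TLS 1.0/1.1 fallback
    ctx.check_hostname = True                     # keep hostname validation on
    ctx.verify_mode = ssl.CERT_REQUIRED           # and certificate verification
    return ctx

ctx = baseline_context()
```

Codifying the floor once and reusing it across every ingress path is the practical counter to the fragmented-edge problem in points 6 and 7.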

AI Isn’t the Risk, Uncontrolled AI Is

Source: Varonis Blog

Author: David Gibson

URL: https://www.varonis.com/blog/securing-ai

ONE SENTENCE SUMMARY:

AI adoption amplifies dormant data risks, requiring integrated inventory, posture, runtime, compliance, TPRM, and data-layer security controls.

MAIN POINTS:

  1. Rapid AI deployment outpaces security, exposing sensitive enterprise data to AI tools.
  2. The “3% paradox” forces balancing AI value against machine-speed data exposure.
  3. AI amplifies existing risks like excessive permissions rather than creating fundamentally new ones.
  4. AI-layer controls alone fail because real damage occurs at the underlying data layer.
  5. Effective inventory needs static scanning plus runtime prompt-based discovery of hidden dependencies.
  6. Dependency mapping must trace endpoint-to-data chains to understand true risk exposure.
  7. Posture assessment spans code, configuration drift, agentic risks, data exposure, and model weaknesses.
  8. Continuous red teaming validates exploitability, covering prompt injection, jailbreaks, and indirect injection attacks.
  9. Unified runtime guardrails and monitoring reduce latency and gaps while enabling SIEM/SOAR-ready auditing.
  10. Complete security requires continuous data classification, identity/permission mapping, remediation, and cross-store activity monitoring.

TAKEAWAYS:

  1. Treat data permissions and placement as primary AI security controls, not secondary hygiene.
  2. Combine runtime telemetry with inventory to maintain an accurate, living AI dependency map.
  3. Validate protections continuously by integrating adversarial testing into CI/CD for models, prompts, and tools.
  4. Automate compliance and vendor assessments using security evidence, not manual questionnaires and snapshots.
  5. Close the AI-security gap by securing AI systems and the entire data estate together, continuously and in context.

ChatGPT advanced account security adds passkeys and hardware keys

Source: Help Net Security

Author: Anamarija Pogorelec

URL: https://www.helpnetsecurity.com/2026/05/04/openai-chatgpt-advanced-account-security/

ONE SENTENCE SUMMARY:

OpenAI’s Advanced Account Security makes ChatGPT/Codex logins phishing-resistant via passkeys/security keys, tighter sessions, no support recovery, and training exclusion.

MAIN POINTS:

  1. OpenAI launched an opt-in Advanced Account Security setting for ChatGPT and Codex accounts.
  2. Enabling it disables password-based sign-in, requiring passkeys or physical security keys.
  3. Removing passwords reduces susceptibility to phishing and credential-stuffing attacks.
  4. Email and SMS recovery are eliminated to prevent takeover via compromised inboxes or phone numbers.
  5. Account recovery relies only on user-held backup passkeys, security keys, and recovery keys.
  6. OpenAI Support cannot restore access after enrollment, shifting recovery responsibility to users.
  7. Shorter sign-in sessions limit exposure from stolen devices or hijacked active sessions.
  8. One enrollment applies across both ChatGPT and Codex under the shared login.
  9. Conversations from enrolled accounts are excluded from model training automatically.
  10. Trusted Access for Cyber individuals must enable it by June 1, 2026, or use phishing-resistant SSO attestation.

TAKEAWAYS:

  1. Prioritize multiple backup authentication factors before enabling to avoid permanent lockout.
  2. Eliminating SMS/email recovery closes common account takeover routes tied to SIM-swaps and inbox compromise.
  3. FIDO2/WebAuthn-based methods align ChatGPT security with major platforms’ phishing-resistant standards.
  4. Hardware key bundles (e.g., dual YubiKeys) support primary-plus-backup operational resilience.
  5. Security-sensitive users gain default assurance their chats won’t be used for training without manual settings.

Microsoft Defender wrongly flags DigiCert certs as Trojan:Win32/Cerdigent.A!dha

Source: BleepingComputer

Author: Lawrence Abrams

URL: https://www.bleepingcomputer.com/news/security/microsoft-defender-wrongly-flags-digicert-certs-as-trojan-win32-cerdigentadha/

ONE SENTENCE SUMMARY:

Microsoft Defender falsely flagged DigiCert root certificates as Trojan:Win32/Cerdigent.A!dha, removing trust-store entries before Microsoft fixed signatures.

MAIN POINTS:

  1. A Defender signature update on April 30 triggered global false-positive detections, first reported by Florian Roth.
  2. Legitimate DigiCert root certificates were labeled Trojan:Win32/Cerdigent.A!dha, alarming administrators and users.
  3. Affected Windows systems removed certificates from the AuthRoot trust store automatically.
  4. Impacted registry path was HKLM\SOFTWARE\Microsoft\SystemCertificates\AuthRoot\Certificates.
  5. Reported certificate thumbprints included 0563B8630D62D75ABBC8AB1E4BDFB5A899B24D43.
  6. Second flagged thumbprint was DDFB16CD4931C973A2037D3FC83A4D7D775D05E4.
  7. Microsoft corrected detections in Security Intelligence update 1.449.430.0; later update 1.449.431.0 followed.
  8. Reddit users indicated the fix also restored previously removed root certificates.
  9. Users can force Defender updates via Windows Security “Protection updates” and “Check for Updates.”
  10. Timing coincided with DigiCert’s incident where attackers obtained EV code-signing certs used for malware.
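
The flagged thumbprints in points 5 and 6 can be checked against an exported certificate inventory; the inventory below is sample data, and a real check would enumerate the AuthRoot registry path from point 4:

```python
# Sketch: check an exported certificate-store inventory for the two
# thumbprints Defender flagged. The inventory list is sample data; on
# a real host you would enumerate the AuthRoot registry path instead.

FLAGGED = {
    "0563B8630D62D75ABBC8AB1E4BDFB5A899B24D43",
    "DDFB16CD4931C973A2037D3FC83A4D7D775D05E4",
}

def missing_flagged_roots(installed_thumbprints: list) -> set:
    """Flagged DigiCert roots absent from the store (possibly auto-removed)."""
    present = {t.upper().replace(" ", "") for t in installed_thumbprints}
    return FLAGGED - present

inventory = ["0563B8630D62D75ABBC8AB1E4BDFB5A899B24D43"]  # one root survived
gone = missing_flagged_roots(inventory)
# 'gone' names the root(s) Defender removed, to restore after updating
```

A non-empty result means the host needs the corrected intelligence update and a trust-store check before TLS and signature validation can be considered healthy.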

TAKEAWAYS:

  1. False positives can directly disrupt Windows trust stores, potentially breaking TLS and software validation.
  2. Rapid signature rollouts need robust safeguards to avoid widespread certificate trust removals.
  3. Updating Defender intelligence quickly resolves misdetections and may automatically restore trust entries.
  4. DigiCert’s breach involved initialization codes and approved orders, enabling issuance of maliciously used certs.
  5. Defender’s flagged roots differed from revoked code-signing certificates, so linkage remains unconfirmed.

NCUA Cybersecurity Exam Prep 2026: What RISOs Say Examiners Look For

Source: Rivial Security Blog

Author: Lucas Hathaway

URL: https://www.rivialsecurity.com/blog/ncua-cybersecurity-exam-prep-2026-what-risos-say-examiners-look-for

ONE SENTENCE SUMMARY:

NCUA exams emphasize quantitative risk assessment maturity, then scrutinize access controls, vendor incident response, AI governance, and board-level reporting.

MAIN POINTS:

  1. Quantitative, dollar-based risk assessment is the foundational expectation regardless of asset size.
  2. Financially quantified risk improves board engagement and supports ROI-based security investment decisions.
  3. Examiners expect formal, documented risk acceptance with board sign-off when controls aren’t implemented.
  4. A complete risk register should map threats, likelihood, inherent risk, controls, and residual risk.
  5. Access control weaknesses are the top 2025 deficiency, aligning with common breach patterns.
  6. Cloud MFA gaps, especially Microsoft 365, frequently trigger findings; privileged MFA is the minimum.
  7. Unconstrained PowerShell enables ransomware; constrained mode, allow listing, and logging are expected.
  8. Application allow listing is becoming a baseline control to reduce zero-day and AI-accelerated exploitation.
  9. Vendor breach response must be contractually defined, including notification timelines and cooperation duties.
  10. Effective governance includes AI policy, use-case risk assessments, data mapping, and disciplined board reporting.

TAKEAWAYS:

  1. Adopt quantitative cyber risk methods to translate security priorities into board-relevant financial outcomes.
  2. Close access control findings fastest by enforcing MFA, hardening PowerShell, and allow-listing execution.
  3. Prevent vendor-driven exam issues by embedding incident response obligations directly into vendor contracts.
  4. Prepare for AI scrutiny with policy, phased rollouts, and per-use-case controls across vendor and internal AI.
  5. Clean exams correlate with investing in external research and technical guidance, not improvising internally.
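
The quantitative register expected in points 1 and 4 can be sketched as a dollar-based row; the figures and the residual formula (inherent × (1 − control effectiveness)) are illustrative simplifications, not the NCUA's prescribed method:

```python
# Sketch of a dollar-based risk register row, per the exam expectation
# of mapping threat -> likelihood -> inherent risk -> controls ->
# residual risk. Figures and the residual formula are illustrative
# simplifications, not a regulator-prescribed model.

def register_row(threat: str, annual_likelihood: float,
                 impact_usd: float, control_effectiveness: float) -> dict:
    inherent = annual_likelihood * impact_usd
    residual = inherent * (1 - control_effectiveness)
    return {
        "threat": threat,
        "inherent_usd": inherent,
        "residual_usd": residual,
    }

row = register_row(
    threat="ransomware via unconstrained PowerShell",
    annual_likelihood=0.2,       # hypothetical: one-in-five chance per year
    impact_usd=1_500_000,
    control_effectiveness=0.8,   # after constrained mode plus allow listing
)
# inherent exposure vs. residual exposure is a board-reportable dollar delta
```

The same row shape also documents a formal risk acceptance: if the control is declined, the residual equals the inherent figure the board must sign off on.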

Microsoft now lets admins choose pre-installed Store apps to uninstall

Source: BleepingComputer

Author: Sergiu Gatlan

URL: https://www.bleepingcomputer.com/news/microsoft/microsoft-now-lets-admins-choose-pre-installed-store-apps-to-uninstall/

ClickFix Removes Your Background but Leaves the Malware

Source: Huntress Blog

Author: unknown

URL: https://www.huntress.com/blog/clickfix-castleloader-backgroundfix

Bridging the gap: How to integrate Claude Security into the Tenable One Exposure Management Platform

Source: Tenable Blog

Author: Liat Hayun

URL: https://www.tenable.com/blog/how-to-integrate-claude-security-into-tenable-one

ONE SENTENCE SUMMARY:

Integrate Claude Security with Tenable One to normalize AI findings, reduce noise, unify attack surface, and prioritize remediation efficiently.

MAIN POINTS:

  1. Frontier AI accelerates vulnerability discovery, shifting bottlenecks to prioritization and remediation.
  2. Siloed AI findings increase triage workload and obscure true business risk.
  3. Tenable One centralizes Claude’s deep-logic code analysis with broader exposure context.
  4. Unified visibility converts raw AI outputs into actionable intelligence and remediation plans.
  5. Initial workflow starts by scanning a chosen repository branch using Claude Security.
  6. Findings are exported as CSV, though automation is recommended for scalability.
  7. Webhooks, scheduled scans, and S3 enable near real-time continuous data delivery.
  8. Tenable One Open Connector ingests Claude data to keep a single pane of glass.
  9. “Override Data (Full Fetch)” refreshes truth, removing remediated issues and preventing stale vulnerabilities.
  10. Attribute mapping and aggregation group by root cause to avoid inflated exposure scores.

TAKEAWAYS:

  1. Measure success by response speed and accuracy, not sheer finding volume.
  2. Contextualizing code risks within exposure management improves business-aligned prioritization.
  3. Automating ingestion prevents manual processes from collapsing under AI-scale discovery.
  4. Correct field mapping makes AI results usable for Tenable risk scoring and workflows.
  5. Root-cause aggregation reduces duplicate alerts and focuses remediation on critical weaknesses.
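The root-cause aggregation described in point 10 might look like the following sketch. The CSV column names (`root_cause`, `severity`) are assumptions for illustration — real Claude Security export fields may differ.

```python
import csv
import io
from collections import defaultdict

def aggregate_by_root_cause(csv_text):
    """Group findings that share a root-cause identifier so one
    weakness yields one remediation item, not many duplicate alerts."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row["root_cause"]].append(row)
    summary = []
    for cause, findings in groups.items():
        summary.append({
            "root_cause": cause,
            "count": len(findings),
            # Score each group once, by its worst finding, to avoid
            # inflating exposure scores with duplicates.
            "max_severity": max(int(f["severity"]) for f in findings),
        })
    return sorted(summary, key=lambda g: g["max_severity"], reverse=True)

sample = """finding_id,root_cause,severity
F1,SQL_INJECTION_ORDERS,9
F2,SQL_INJECTION_ORDERS,7
F3,HARDCODED_SECRET,6
"""
print(aggregate_by_root_cause(sample))
```

In a real pipeline this normalization step would sit between the webhook/S3 delivery and the Open Connector ingest, so Tenable One receives one scored item per weakness.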

Stopping the quiet drift toward excessive agency with re-permissioning

Source: Stopping the quiet drift toward excessive agency with re-permissioning | CSO Online

Author: unknown

URL: https://www.csoonline.com/article/4165067/stopping-the-quiet-drift-toward-excessive-agency-with-re-permissioning.html

ONE SENTENCE SUMMARY:

As LLMs become executing agents, organizations must control permissions, visibility, and supply-chain risk to prevent unauthorized actions at scale.

MAIN POINTS:

  1. Early LLM failures were mostly harmless text issues, not operational security incidents.
  2. Agentic AI now connects tools, databases, and systems to perform multi-step actions.
  3. Security focus shifts from model capability to internal treatment, permissioning, and governance.
  4. Unauthorized actions matter more than hallucinations when agents have autonomy and access.
  5. MCP and agent-to-agent interoperability expand reach, increasing systemic attack surface.
  6. Rapid enterprise adoption outpaces formal assessments, creating a growing security gap.
  7. Cross-system workflows obscure root cause, making auditing and blame assignment difficult.
  8. Over-permissioning is common, giving agents unnecessary access and excessive operational agency.
  9. Key risks include black-box decisions, human overreliance, and upstream tool/data manipulation.
  10. Re-permissioning requires continuous audits, least privilege, human oversight, and secure integrations.

TAKEAWAYS:

  1. Treat agents like operational actors, not chatbots, because they execute real changes.
  2. Reduce autonomy risk by eliminating unnecessary tool/API access and enforcing least privilege.
  3. Improve governance with end-to-end visibility, logging, irregular-behavior detection, and audits.
  4. Require human-in-the-loop approvals for sensitive data, finance, access changes, and major updates.
  5. Harden the agent supply chain by vetting, patching, and tightly controlling third-party integrations.
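The least-privilege re-permissioning in takeaway 2 can be illustrated with a small sketch: compare what each agent is granted against what it has actually exercised. The agent names, scope strings, and registry shape here are hypothetical.

```python
def excess_permissions(granted, required):
    """Return scopes an agent holds but has not needed -- candidates
    for revocation under least privilege."""
    return sorted(set(granted) - set(required))

# Hypothetical agent registry: granted scopes vs. scopes actually
# exercised during the review window (e.g. derived from access logs).
agents = {
    "invoice-bot": {
        "granted": ["crm.read", "billing.write", "hr.read"],
        "used": ["crm.read", "billing.write"],
    },
}

for name, perms in agents.items():
    excess = excess_permissions(perms["granted"], perms["used"])
    if excess:
        print(f"{name}: revoke {excess}")
```

Run on a recurring schedule, this turns re-permissioning from a one-time cleanup into the continuous audit the article calls for.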

AI Inventory Template for Financial Institutions | Rivial Security

Source: Rivial Security Blog

Author: Lucas Hathaway

URL: https://www.rivialsecurity.com/blog/ai-inventory-template

ONE SENTENCE SUMMARY:

Financial institutions need a living AI inventory to track AI usage, ownership, data, risks, controls, and evidence for governance.

MAIN POINTS:

  1. AI inventories provide a governed system of record, not a static spreadsheet.
  2. NIST AI RMF Govern 1.6 calls for inventory mechanisms aligned to risk priorities.
  3. Scope must include internal models, embedded vendor AI, and employee-used generative tools.
  4. Undocumented AI creates gaps in data handling, accountability, explainability, and control ownership.
  5. Interagency third-party risk guidance requires lifecycle oversight even when AI is outsourced.
  6. Executive reporting improves by slicing inventory data by unit, tier, vendors, and control maturity.
  7. Core fields include owners, purpose, vendor/build type, data sensitivity, and outputs influenced.
  8. Risk-tiering enables proportionate reviews based on impact, sensitivity, oversight, and regulatory exposure.
  9. Inventory value increases when linked to approvals, workflows, control mapping, and evidence locations.
  10. Common failures include missing vendor AI, lacking ownership, ignoring data context, and omitting control linkage.

TAKEAWAYS:

  1. Build inventories to support governance decisions, not to “complete a checkbox.”
  2. Capture third-party and embedded AI to avoid false completeness about institutional exposure.
  3. Assign both business and technical/security ownership to ensure updates and remediation happen.
  4. Record input data types and sensitivity to drive privacy, security, and compliance requirements.
  5. Keep review dates/status and evidence pointers so audits, exams, and boards get defensible answers.
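The core inventory fields in point 7 could be modeled roughly as follows. The field names and tiering scheme are illustrative, not a regulatory schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One row of a living AI inventory -- fields are illustrative."""
    name: str
    business_owner: str           # accountable business unit
    technical_owner: str          # security/technical counterpart
    purpose: str
    vendor_or_built: str          # "vendor-embedded", "third-party", "in-house"
    data_sensitivity: str         # e.g. "public", "internal", "NPI"
    outputs_influenced: list = field(default_factory=list)
    risk_tier: int = 3            # 1 = highest impact, reviewed most often
    last_review: str = ""
    evidence_location: str = ""   # pointer for auditors and examiners

entry = AIInventoryEntry(
    name="Vendor fraud-scoring model",
    business_owner="Deposits Ops",
    technical_owner="InfoSec",
    purpose="Transaction fraud scoring",
    vendor_or_built="vendor-embedded",
    data_sensitivity="NPI",
    outputs_influenced=["transaction holds"],
    risk_tier=1,
)
print(entry.name, "tier", entry.risk_tier)
```

Capturing both owners and an evidence pointer on every row is what lets the inventory answer exam questions, not just enumerate tools.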

8 best practices for CISOs conducting risk reviews

Source: Microsoft Security Blog

Author: Rico Mariani

URL: https://www.microsoft.com/en-us/security/blog/2026/04/29/8-best-practices-for-cisos-conducting-risk-reviews/

ONE SENTENCE SUMMARY:

Microsoft Deputy CISO Rico Mariani outlines eight structured risk-review areas to shift security from reactive fixes toward proactive Zero Trust controls.

MAIN POINTS:

  1. Start by identifying and scoping the critical assets attackers most want.
  2. Enumerate all applications and microservices that expose interfaces and reach assets.
  3. Prefer standards-based token authentication using proven issuers like Microsoft Entra.
  4. Minimize token power through fine-grained scoping, short lifetimes, and limited audiences.
  5. Enforce authorization consistently with declarative patterns to reduce code bugs.
  6. Apply strong network isolation to constrain lateral movement and limit reachable systems.
  7. Build threat-model-driven detections across perimeter and internal signals to alert on attacks.
  8. Maintain robust auditing logs to determine breach extent, impact, and notification needs.
  9. Include overlooked areas like backups, support systems, and privileged operational tools.
  10. Scrutinize development and test environments because buggy code can expose production assets.

TAKEAWAYS:

  1. Consistent risk-review questions convert security data into proactive posture improvements.
  2. Least-privilege tokens and standard libraries shrink blast radius after inevitable compromise.
  3. Simple, repeatable authorization patterns reduce exploitable mistakes in enforcement logic.
  4. Segmentation plus logging makes attacker footholds less useful and improves hunting.
  5. Comprehensive inventories must cover backups, support, and nonproduction systems to avoid blind spots.
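The token-minimization guidance in point 4 reduces to a claims policy that can be checked mechanically. The lifetime limit, audience allow list, and scope rule below are invented examples for illustration, not Entra defaults.

```python
import time

def token_policy_violations(claims, max_lifetime_s=3600,
                            allowed_audiences=("api://orders",)):
    """Flag token claims that violate a least-privilege policy:
    long lifetimes, unexpected audiences, wildcard scopes."""
    problems = []
    if claims["exp"] - claims["iat"] > max_lifetime_s:
        problems.append("lifetime exceeds policy")
    if claims["aud"] not in allowed_audiences:
        problems.append("audience not on allow list")
    if "*" in claims.get("scp", ""):
        problems.append("wildcard scope")
    return problems

now = int(time.time())
# A deliberately over-powered token: 24h lifetime, broad audience,
# wildcard scope -- large blast radius if stolen.
risky = {"iat": now, "exp": now + 86400, "aud": "api://everything", "scp": "*"}
print(token_policy_violations(risky))
```

Each violation the check flags corresponds to extra blast radius after the compromise the article treats as inevitable.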

The Money Mule Problem Solution: What Every Scam Has in Common

Source: Recorded Future

Author: unknown

URL: https://www.recordedfuture.com/blog/money-mule-solution

ONE SENTENCE SUMMARY:

Scams cost $450B–$1T globally; proactive mule-account intelligence gathered through agentic engagement helps institutions block fraudulent payments amid rising reimbursement regulations.

MAIN POINTS:

  1. Global scam losses range from $450B to $1T annually.
  2. Scams differ from card fraud by requiring no data breach.
  3. Victims are persuaded to authorize and send money themselves.
  4. Mule accounts provide the critical exit point for scam proceeds.
  5. Targeting mule infrastructure is more stable than chasing individual scam tactics.
  6. Pre-transaction intelligence enables more actionable prevention than post-transaction behavioral monitoring.
  7. CYBERA deploys agentic personas to interact directly with active scammers.
  8. Engagement aims to extract mule account details used for laundering funds.
  9. Collected information is verified intelligence, not probabilistic risk scoring.
  10. Regulatory trends increase institutional liability for APP fraud reimbursement across multiple countries.

TAKEAWAYS:

  1. Prioritize disrupting mule accounts to materially reduce scam success rates.
  2. Invest in pre-transaction intelligence rather than relying solely on anomaly detection.
  3. Direct adversary engagement can yield higher-confidence indicators than scoring models.
  4. Prepare for expanding reimbursement regimes by strengthening proactive controls now.
  5. Treat scam prevention as a strategic risk issue, not just a traditional fraud problem.
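The pre-transaction screening in point 6 can be sketched as a lookup against mule intelligence before a payment executes. The account identifier, feed shape, and confidence labels below are hypothetical.

```python
def screen_payment(beneficiary_account, mule_intel):
    """Decide a payment's fate before execution based on whether the
    destination matches mule-account intelligence."""
    hit = mule_intel.get(beneficiary_account)
    if hit is None:
        return "allow"
    # Verified intelligence (e.g. extracted via adversary engagement)
    # warrants a hard block; weaker signals route to manual review.
    return "block" if hit["confidence"] == "verified" else "review"

# Hypothetical intelligence feed keyed by account identifier.
mule_intel = {
    "GB29NWBK60161331926819": {"confidence": "verified", "source": "engagement"},
}
print(screen_payment("GB29NWBK60161331926819", mule_intel))  # block
```

The design choice this illustrates is the article's core argument: acting on verified destination intelligence before the transfer, rather than scoring anomalies after the money has moved.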

Microsoft Patches Entra ID Role Flaw That Enabled Service Principal Takeover

Source: The Hacker News

Author: info@thehackernews.com (The Hacker News)

URL: https://thehackernews.com/2026/04/microsoft-patches-entra-id-role-flaw.html

ONE SENTENCE SUMMARY:

Silverfort found that Entra's Agent ID Administrator role allowed service principal takeovers, enabling privilege escalation until Microsoft patched the scope checks across all clouds.

MAIN POINTS:

  1. Microsoft introduced Agent ID Administrator to manage AI agent identities’ full lifecycle.
  2. The agent identity platform supports secure authentication, resource access, and agent discovery.
  3. Silverfort discovered role holders could assign themselves ownership of arbitrary service principals.
  4. Ownership enabled attackers to add credentials and authenticate as the hijacked principal.
  5. Compromised principals let adversaries act within whatever permissions the principal already had.
  6. Privileged service principals could grant directory roles or high-impact Microsoft Graph permissions.
  7. Researcher Noa Ariel described the issue as “full service principal takeover.”
  8. Responsible disclosure occurred March 1, 2026, with remediation deployed April 9 across clouds.
  9. Post-fix attempts to target non-agent service principals now fail with a “Forbidden” error.
  10. The case underscores scoping validation risks when building new identities atop shared primitives.

TAKEAWAYS:

  1. Treat service principal ownership as a high-risk capability requiring tight governance.
  2. Confirm built-in role scopes match intended identity types, especially for emerging agent identities.
  3. Track and investigate changes to service principal owners as potential takeover indicators.
  4. Audit service principal credential creation and modifications to detect unauthorized persistence.
  5. Strengthen tenant posture by hardening and reviewing all privileged service principals regularly.
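Takeaway 3's detection idea — treating owner additions on non-agent service principals by agent-role holders as takeover indicators — can be sketched over a simplified event list. The field names below are invented and do not match the real Entra audit log schema.

```python
def suspicious_owner_changes(events, agent_role="Agent ID Administrator"):
    """Flag audit events where a holder of the agent-scoped role adds
    an owner to a NON-agent service principal -- the takeover pattern
    described in the research."""
    flagged = []
    for e in events:
        if (e["action"] == "Add owner to service principal"
                and e["actor_role"] == agent_role
                and e["target_type"] != "agent"):
            flagged.append(e)
    return flagged

# Illustrative events only; real audit entries carry different fields.
events = [
    {"action": "Add owner to service principal",
     "actor_role": "Agent ID Administrator",
     "target_type": "application", "target": "payroll-sp"},
    {"action": "Add owner to service principal",
     "actor_role": "Agent ID Administrator",
     "target_type": "agent", "target": "copilot-agent"},
]
print([e["target"] for e in suspicious_owner_changes(events)])
```

Even post-patch, alerting on this pattern is cheap insurance: owner changes followed by new credentials on a privileged service principal is the persistence sequence takeaways 3 and 4 warn about.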

The Data Toilet: Why Your SIEM Strategy is Failing (and How to Fix It)

Source: CISO Tradecraft® Newsletter

Author: CISO Tradecraft

URL: https://cisotradecraft.substack.com/p/the-data-toilet-why-your-siem-strategy

ONE SENTENCE SUMMARY:

SIEM success demands prioritized detections, honest testing, sustainable pricing, and portable data architecture, while resisting vendor lock-in and vanity metrics.

MAIN POINTS:

  1. Epic SIEM failures stem from complexity, broken ingestion, and license caps during breaches.
  2. Prioritizing executive dashboards and compliance over detection engineering creates a useless “data toilet.”
  3. Aggressive log truncation and skipping DHCP data leave investigations blind when incidents occur.
  4. Gartner Magic Quadrant influence is long-term “osmosis,” not simple pay-to-play bribery.
  5. Selecting a SIEM should match organizational identity, from conservative banks to fast startups.
  6. SaaS SIEMs create “Hotel California” lock-in via data gravity, egress fees, and audits.
  7. Decoupled storage-plus-analytics boosts AI readiness but adds vendors, latency, and real-time challenges.
  8. Pipeline tooling is essential to prevent overflow, maintain fidelity, and avoid costly data rationing.
  9. Per-alert pricing can incentivize suppressing visibility, recreating blind spots despite shifting cost models.
  10. Outcome-based “bridge stress tests” beat MITRE coverage games for measuring real detection effectiveness.

TAKEAWAYS:

  1. Invest in data completeness and detection engineering before dashboards and checkbox compliance.
  2. Evaluate vendors by exit costs, retention requirements, and migration feasibility, not introductory discounts.
  3. Demand transparent pricing aligned to infrastructure realities, avoiding per-alert or per-GB rationing incentives.
  4. Consider decoupled architectures only with plans for latency, pipelines, and multi-vendor operations.
  5. Measure security with scoped adversary tests and published failure thresholds, replacing “MITRE Bingo.”
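The rationing failure mode in points 3 and 8 can be made concrete with a toy calculation: when ingest exceeds a license cap, teams typically drop the chattiest sources first, and those are often exactly the ones investigations need. The source volumes and cap below are invented.

```python
def rationing_gap(daily_gb_by_source, license_cap_gb):
    """Show which log sources go dark when a per-GB license cap forces
    naive rationing -- the 'data toilet' failure mode."""
    total = sum(daily_gb_by_source.values())
    if total <= license_cap_gb:
        return total, []
    overflow = total - license_cap_gb
    dropped = []
    # Naive rationing: shed the highest-volume sources first, even if
    # (like endpoint telemetry) they are investigation-critical.
    for source, gb in sorted(daily_gb_by_source.items(), key=lambda kv: -kv[1]):
        if overflow <= 0:
            break
        dropped.append(source)
        overflow -= gb
    return total, dropped

volumes = {"firewall": 300, "dhcp": 120, "endpoint": 500, "dns": 200}
total, dropped = rationing_gap(volumes, license_cap_gb=600)
print(total, dropped)
```

A pipeline tool that filters or routes by value per source, rather than raw volume, avoids exactly this blind-spot pattern — which is the argument for pricing aligned to infrastructure realities rather than per-GB caps.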