Only 30 minutes per quarter on cyber risk: Why CISO-board conversations are falling short

Source: CSO Online

Author: unknown

URL: https://www.csoonline.com/article/4141873/only-30-minutes-per-quarter-on-cyber-risk-why-ciso-board-conversations-are-falling-short.html

ONE SENTENCE SUMMARY:

Report finds board-CISO cybersecurity discussions are brief, passive, and insufficiently forward-looking, especially regarding AI-driven threats and strategic risk decisions.

MAIN POINTS:

  1. Enterprise boards increasingly include cybersecurity, yet conversations remain superficial and time-boxed.
  2. Typical CISO-board interaction lasts 30 minutes per quarter, limiting meaningful engagement.
  3. Only 30% of boards rate relationships with CISOs as strong and collaborative.
  4. Most CISOs report quarterly, but updates are often routed through committees.
  5. Limited follow-through makes cybersecurity feel like a briefing rather than exploration.
  6. Extended airtime correlates with strategic dialogue on trade-offs, risk tolerance, and decisions.
  7. Directors understand regulatory trends and current initiatives better than emerging AI threats.
  8. AI amplifies attack sophistication while creating new high-value assets and loss scenarios.
  9. Less than half of boards join simulations or tabletop exercises, keeping oversight passive.
  10. Effective CISOs tie cyber narratives to business risk, ROI, and enterprise strategy.

TAKEAWAYS:

  1. Prioritize longer, discussion-oriented board sessions to enable strategic cybersecurity decision-making.
  2. Translate cyber metrics into business-impact narratives about risk tolerance and trade-offs.
  3. Provide forward-looking analysis on AI-enabled threats and AI model/asset protection.
  4. Increase board participation in exercises to build experiential understanding of incident dynamics.
  5. Adopt a business-leader posture to shape the cyber agenda around enterprise risks.

mquire: Open-source Linux memory forensics tool

Source: Help Net Security

Author: Anamarija Pogorelec

URL: https://www.helpnetsecurity.com/2026/03/04/mquire-open-source-linux-memory-forensics-tool/

ONE SENTENCE SUMMARY:

Trail of Bits’ mquire enables Linux kernel memory forensics without external symbols using BTF, Kallsyms, and SQL-based querying.

MAIN POINTS:

  1. Traditional Linux memory forensics relies on exact kernel debug symbols that often aren’t available.
  2. mquire analyzes memory dumps without needing external debug repositories or symbol packages.
  3. BTF provides compact kernel type layouts, offsets, and relationships for structure parsing.
  4. Kallsyms addresses are located by scanning dumps, mirroring live /proc/kallsyms functionality.
  5. BTF requires Linux kernel 4.18+ with BTF enabled, common in major distributions.
  6. Kallsyms support requires kernel 6.4+ due to scripts/kallsyms.c format changes.
  7. An interactive SQL interface, inspired by osquery, enables intuitive forensic exploration.
  8. Queries can join processes, open files, dentries, and network connections for correlated analysis.
  9. Page-cache extraction recovers open or deleted files via .dump, plus raw carving with .carve.
  10. Hidden process detection compares task-list enumeration against PID namespace enumeration strategies.
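The cross-artifact joins in point 8 can be pictured with a toy example. The sketch below uses Python's built-in sqlite3 with made-up table and column names (`processes`, `network_connections`); mquire's actual schema will differ, but the osquery-style join pattern is the same.

```python
import sqlite3

# Toy in-memory schema illustrating the kind of cross-artifact SQL join an
# osquery-style forensic interface enables. Table and column names here are
# illustrative stand-ins, not mquire's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE processes (pid INTEGER, name TEXT, ppid INTEGER);
CREATE TABLE network_connections (pid INTEGER, remote_addr TEXT, remote_port INTEGER);
INSERT INTO processes VALUES (101, 'sshd', 1), (202, 'nc', 101);
INSERT INTO network_connections VALUES (202, '203.0.113.9', 4444);
""")

# Correlate processes with their network connections, keeping only
# connections to a suspicious port.
rows = conn.execute("""
SELECT p.pid, p.name, c.remote_addr, c.remote_port
FROM processes p
JOIN network_connections c ON c.pid = p.pid
WHERE c.remote_port = 4444
""").fetchall()

for pid, name, addr, port in rows:
    print(f"{name} (pid {pid}) -> {addr}:{port}")
```

Because the interface is plain SQL, adding open files or dentries to the correlation is just another JOIN rather than a new tool feature.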

TAKEAWAYS:

  1. Eliminating external debug symbols reduces failure modes during time-sensitive incident response.
  2. BTF+Kallsyms lets analysts reconstruct kernel structures directly from the dump.
  3. SQL makes complex cross-artifact correlations approachable and repeatable in investigations.
  4. Page-cache recovery can retrieve valuable evidence even after on-disk deletion.
  5. Kernel-only scope limits user-space visibility, and future Kallsyms changes may require tool updates.

Minimum viable probabilistic cyber risk quantification

Source: r10n.com

Author: Ryan McGeehan

URL: https://r10n.com/mvp-cyber-risk-quantification/

ONE SENTENCE SUMMARY:

A minimum viable, panel-elicited probabilistic method builds annual cyber loss distributions and tail scenarios for iterative, calibration-driven security prioritization.

MAIN POINTS:

  1. Produces incident definition, annual loss distribution, tail-loss taxonomy, and review cadence with scoring loop.
  2. Requires no platforms, minimal time, and works without historical loss datasets.
  3. Starts by defining “incident” using operational triggers like on-call pages or IR activation.
  4. Elicits P50/P90 incident costs, then fits a parametric severity distribution (often lognormal).
  5. Forecasts annual incident counts via P50/P90 to create a frequency distribution.
  6. Combines frequency and severity with Monte Carlo sampling to generate annual loss distribution.
  7. Includes comprehensive cost components such as churn, delivery disruption, sales friction, and regulatory delays.
  8. Uses anonymous-first elicitation and re-elicitation to reduce anchoring, dominance, and bias.
  9. Constructs MECE taxonomy for >P90 “heavy hitter” scenarios, with controlled “other” category usage.
  10. Links every mitigation initiative to scenario classes and updates probabilities/impacts over time.
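Points 4–6 can be sketched in a few lines of standard-library Python. The example fits a lognormal severity from elicited P50/P90 per-incident costs, then Monte Carlo samples annual losses; as a simplifying assumption it models incident counts as Poisson (the post elicits P50/P90 counts instead), and all dollar figures are invented.

```python
import math
import random

Z90 = 1.2816  # standard normal 90th-percentile z-score

def lognormal_params(p50, p90):
    """Fit lognormal mu/sigma from elicited P50/P90 per-incident costs."""
    mu = math.log(p50)
    sigma = (math.log(p90) - math.log(p50)) / Z90
    return mu, sigma

def poisson(rng, lam):
    # Knuth's algorithm; fine for small annual rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_loss(cost_p50, cost_p90, mean_incidents, trials=50_000, seed=7):
    """Monte Carlo: sample an incident count, then per-incident severities."""
    rng = random.Random(seed)
    mu, sigma = lognormal_params(cost_p50, cost_p90)
    losses = []
    for _ in range(trials):
        n = poisson(rng, mean_incidents)
        losses.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    losses.sort()
    return losses

losses = simulate_annual_loss(cost_p50=50_000, cost_p90=400_000, mean_incidents=3)
p50 = losses[len(losses) // 2]
p90 = losses[int(len(losses) * 0.9)]
print(f"annual loss P50 ~ {p50:,.0f}, P90 ~ {p90:,.0f}")
```

The sorted `losses` list is the annual loss distribution: its upper tail (beyond P90) is what the "heavy hitter" scenario taxonomy then decomposes.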

TAKEAWAYS:

  1. Treat risk quant as an updateable forecast artifact, not a claim of truth.
  2. Fast elicitation plus simple modeling enables early prioritization without becoming a data project.
  3. Tail-loss scenario thinking drives actionable alignment between mitigations and largest potential damages.
  4. Bias-resistant group forecasting improves calibration and decision quality over ad-hoc judgment.
  5. Quarterly refreshes and scoring create a feedback loop that continuously refines assumptions.

The TTX + TTP Replay FAQ: Executive and Practitioner Guide to Evidence-Backed Cyber Defense Validation

Source: Lares

Author: Andrew Heller

URL: https://www.lares.com/blog/ttxttp-faq/

ONE SENTENCE SUMMARY:

Integrating tabletop exercises with TTP replays replaces assumed readiness with quantified control effectiveness, aligning people, process, and technology for defensible cyber resilience.

MAIN POINTS:

  1. Confidence in incident readiness often exceeds real-world decision accuracy during crises.
  2. Traditional security testing stays siloed, creating gaps between plans and technical reality.
  3. Tabletop Exercises evaluate coordination, process maturity, and decisions under pressure.
  4. TTX outcomes depend on unverified assumptions about control behavior and tool performance.
  5. TTP Replays execute real adversary behaviors safely in production to validate detections.
  6. Running only TTX yields theoretical response plans detached from actual telemetry.
  7. Running only TTP Replay produces technical findings lacking executive context and escalation paths.
  8. Integrated TTX+TTP links scenarios to measured outcomes, enabling evidence-backed improvements.
  9. Quantitative metrics include MTTD, MTTR, alert fidelity, and false negative rate.
  10. A five-level maturity model progresses from compliance confidence to continuous validation aligned with CTEM.

TAKEAWAYS:

  1. Capture technical assumptions during tabletops, then test them via adversary emulation playbooks.
  2. Prioritize detection engineering using replay-exposed visibility gaps rather than MITRE “coverage” targets.
  3. Validate ROSI by proving tool effectiveness, enabling tuning, vendor remediation, or budget reallocation.
  4. Strengthen board oversight using objective control-performance data instead of theoretical response narratives.
  5. Support regulatory timelines like SEC 4-day disclosure by combining fast detection validation and materiality decision rehearsal.

Structured analysis for small CTI teams: Using AI to reinforce tradecraft

Source: Feedly Blog

Author: Dave Johnson

URL: https://feedly.com/ti-essentials/posts/structured-analysis-for-small-cti-teams-using-ai-to-reinforce-tradecraft

ONE SENTENCE SUMMARY:

Small CTI teams can use prompt-driven LLM workflows to apply structured analytic techniques quickly, improving rigor, consistency, and defensibility.

MAIN POINTS:

  1. Structured analytic techniques are taught widely but frequently skipped under operational time pressure.
  2. Collaboration-centric SATs clash with remote, understaffed CTI team realities.
  3. Accepting reporting at face value increases bias risk and weakens conclusions.
  4. LLMs can act as sparring partners that challenge assumptions, not replace analysts.
  5. AI assistance can surface assumptions, organize evidence, and generate alternative hypotheses.
  6. Salt Typhoon case study illustrated uncertainty hidden beneath confident attribution narratives.
  7. Key assumptions checks can be accelerated via prompts producing assumption tables and gaps.
  8. ACH prompts help eliminate weaker hypotheses by structuring evidence against alternatives.
  9. Devil’s advocacy prompts generate credible critiques to harden assessments against stakeholder challenges.
  10. Pre-mortems reconstruct failure paths to reveal missing evidence, dependencies, and overconfidence drivers.

TAKEAWAYS:

  1. Lightweight SATs can be completed in roughly 20 minutes using repeatable prompt templates.
  2. Running separate sessions per problem reduces anchoring and cross-contamination bias in analysis.
  3. Grounding outputs in curated intelligence and citations improves defensibility and traceability.
  4. Using structured outputs increases clarity, consistency, and auditability of analytic reasoning.
  5. Some structured analysis is better than none when resources prevent full team collaboration.

Securing the Modern Cloud: 5 Best Practices for Protecting Multi-Cloud Workloads

Source: Cloud Security Alliance

Author: unknown

URL: https://cloudsecurityalliance.org/articles/securing-the-modern-cloud-5-best-practices-for-protecting-multi-cloud-workloads

ONE SENTENCE SUMMARY:

Comprehensive cloud security requires CNAPP-based workload protection across multi-cloud environments using continuous scanning, container lifecycle security, compliance automation, and centralized visibility.

MAIN POINTS:

  1. CSPM alone misses workload-layer risks; workloads require dedicated security controls.
  2. Dynamic, distributed architectures expand attack surface across VMs, containers, databases, and serverless functions.
  3. Multi-cloud deployments demand consistent visibility and protections across disparate providers.
  4. Workload integrity underpins operational resilience, not only data protection.
  5. CNAPP platforms unify prevention, detection, and response for vulnerabilities, misconfigurations, insecure APIs.
  6. Continuous vulnerability scanning must replace periodic assessments in fast-moving cloud deployments.
  7. Contextual enrichment enables risk-based prioritization beyond raw CVSS severity.
  8. Agentless scanning uses CSP APIs for scalable posture insights without agent management overhead.
  9. Container security should span build-to-runtime, integrating into CI/CD and registry scanning.
  10. Automated compliance monitoring maintains audit readiness amid rapid cloud configuration changes.

TAKEAWAYS:

  1. Shift from infrastructure-only posture management to full workload security coverage.
  2. Favor continuous, context-driven vulnerability management to surface truly exploitable “toxic combinations.”
  3. Use agentless approaches for broad, low-friction multi-cloud workload visibility.
  4. Embed container security into DevOps from build through production runtime.
  5. Centralize exposure management to create a single source of truth for collaboration and prioritization.

Security debt is becoming a governance issue for CISOs

Source: Help Net Security

Author: Mirko Zorz

URL: https://www.helpnetsecurity.com/2026/03/02/ciso-security-debt-report/

ONE SENTENCE SUMMARY:

Veracode’s 2026 report shows growing, aging application security backlogs, urging board-level governance, risk-based prioritization, and automation to reduce exploitable exposure.

MAIN POINTS:

  1. Study analyzed 1.6 million applications using SAST, DAST, SCA, and pen testing.
  2. Security debt means known vulnerabilities unresolved for more than one year.
  3. Organizations with security debt rose to 82% in 2026 from 74%.
  4. Critical security debt increased to 60% of organizations from 50%.
  5. Legacy and business-critical systems slow fixes due to change risk and dependency.
  6. Wysopal advocates board-level KPIs, quarterly targets, and governed risk acceptance.
  7. Suggested policy: fix high-risk vulnerabilities before release, especially crown-jewel applications.
  8. Overall flaw prevalence remained high at 78% of applications in 2026.
  9. Highly severe and exploitable vulnerabilities grew to 11.3% from 8.3%.
  10. Remediation half-life improved slightly to 243 days; third-party critical debt stayed high at 66%.

TAKEAWAYS:

  1. Treat security debt like financial debt with executive oversight and measurable reduction goals.
  2. Prioritize exploitable, high-impact vulnerabilities over raw vulnerability counts.
  3. Focus remediation on crown-jewel applications using fast lanes and strict release gates.
  4. Embed automation and AI-assisted fixes into developer workflows to maintain velocity.
  5. Strengthen supply-chain governance via dependency visibility, update cadences, and ownership clarity.

What to Know About the Notepad++ Supply-Chain Attack

Source: Threat Intelligence Blog | Flashpoint

Author: Flashpoint

URL: https://flashpoint.io/blog/what-to-know-about-the-notepad-supply-chain-attack/

ONE SENTENCE SUMMARY:

CVE-2025-15556 let attackers hijack Notepad++ updates via missing signature checks, enabling Lotus Blossom backdoors, persistence, and data theft.

MAIN POINTS:

  1. Vulnerability resides in Notepad++ WinGUP updater lacking installer signature integrity verification.
  2. Hosting-provider compromise enabled supply-chain tampering beyond simple coding mistakes.
  3. Attackers intercepted WinGUP update requests and redirected them to malicious infrastructure.
  4. MitM techniques and DNS cache poisoning facilitated redirection to attacker-controlled servers.
  5. Trojanized update.exe installers were delivered while appearing as legitimate software patches.
  6. Lotus Blossom campaign operated July–October 2025 across three evolving attack chains.
  7. Early chains deployed Cobalt Strike beacons using NSIS installers and rotating C2 URLs.
  8. Final chain installed Chrysalis backdoor via BluetoothService.exe, log.DLL, and shellcode.
  9. Mapped ATT&CK techniques include DLL hijacking, registry run keys, services, and process injection.
  10. Recommended defenses include patching to v8.9.1+, hunting TTPs, monitoring domains, and hardening endpoints.

TAKEAWAYS:

  1. Prioritize upgrading Notepad++ to v8.9.1+ to enforce signature verification.
  2. Treat software supply-chain risk as infrastructure-dependent, not only code-dependent.
  3. Hunt for persistence artifacts like suspicious DLL loads, run keys, and new services.
  4. Strengthen network controls against redirect-based delivery using domain monitoring and blocking.
  5. Use MITRE ATT&CK mappings to guide detection engineering and proactive threat hunting.

Google Disrupts UNC2814 GRIDTIDE Campaign After 53 Breaches Across 42 Countries

Source: The Hacker News

Author: The Hacker News

URL: https://thehackernews.com/2026/02/google-disrupts-unc2814-gridtide.html

ONE SENTENCE SUMMARY:

Google and partners disrupted UNC2814’s China-linked espionage campaign using Google Sheets C2 backdoor GRIDTIDE across governments and telecoms worldwide.

MAIN POINTS:

  1. Google, Mandiant, and partners dismantled suspected China-nexus UNC2814 infrastructure.
  2. Confirmed breaches impacted at least 53 organizations across 42 countries.
  3. Additional suspected infections span more than 20 other nations.
  4. Tracking since 2017 revealed SaaS API calls used as disguised command-and-control.
  5. GRIDTIDE backdoor abuses Google Sheets API to blend C2 within legitimate traffic.
  6. Malware supports file transfer and arbitrary shell command execution on compromised systems.
  7. Initial access likely involves exploiting web servers and edge systems, still under investigation.
  8. Lateral movement utilized service accounts and SSH within victim environments.
  9. LotL binaries enabled reconnaissance, privilege escalation, and persistence via systemd service xapt.
  10. SoftEther VPN Bridge established encrypted outbound connectivity, consistent with other Chinese groups’ tactics.

TAKEAWAYS:

  1. SaaS platforms can be repurposed as stealthy C2 channels via legitimate APIs.
  2. Edge appliances remain high-risk entry points due to exposure and weak detection coverage.
  3. Persistence commonly leverages native services (e.g., systemd) to survive reboots and scrutiny.
  4. Telecom and government sectors face sustained, global-scale espionage with high evasion capability.
  5. Large disruptions may be temporary; defenders should expect rapid attacker reconstitution efforts.

The million-dollar front door and the tailgater: Why strong auth could fail at SaaS session integrity

Source: The Red Canary Blog: Information Security Insights

Author: Nick Weber

URL: https://redcanary.com/blog/security-operations/saas-session-integrity/

ONE SENTENCE SUMMARY:

Strong MFA secures login, but portable SSO sessions remain hijackable; continuous session validation mitigates cookie and token replay attacks.

MAIN POINTS:

  1. Confusing secure authentication with secure access creates a dangerous post-login blind spot.
  2. FIDO2, device trust, UEBA, and conditional access harden the IdP login “front door.”
  3. SAML assertions or OIDC tokens are handed to service providers to enable SSO.
  4. Service providers mint session cookies after validation, ending IdP involvement.
  5. Stolen session cookies grant access because possession effectively equals authentication.
  6. Information-stealer malware commonly exfiltrates browser cookie jars from compromised endpoints.
  7. Device-bound IdP sessions don’t automatically bind downstream SaaS sessions like AWS or Salesforce.
  8. HTTP and federation standards make bearer cookies/tokens portable by design, limiting native defenses.
  9. DPoP/token binding can reduce replay risk, but SaaS support remains sparse.
  10. Defense-in-depth requires shorter TTLs, IP pinning, anomaly detection, and real-time session revocation.
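One way to picture the correlation idea in point 10: flag service-provider sessions presented from a source IP that never appeared in the user's recent IdP sign-ins. The event field names below are hypothetical, not any vendor's schema, and a real SIEM rule would add time windows and NAT/VPN allowances.

```python
# Sketch: flag service-provider (SP) sessions whose source IP has no matching
# IdP sign-in for the same user. Field names and event shapes are hypothetical.

def known_idp_ips(idp_signins, user):
    """Collect source IPs from successful IdP sign-ins for one user."""
    return {e["ip"] for e in idp_signins if e["user"] == user and e["success"]}

def suspicious_sessions(idp_signins, sp_sessions):
    """Return SP sessions presented from an IP with no matching IdP login."""
    return [s for s in sp_sessions
            if s["ip"] not in known_idp_ips(idp_signins, s["user"])]

idp = [
    {"user": "alice", "ip": "198.51.100.7", "success": True},
]
sp = [
    {"user": "alice", "ip": "198.51.100.7", "app": "crm"},   # matches IdP login
    {"user": "alice", "ip": "203.0.113.66", "app": "crm"},   # replayed cookie?
]
for s in suspicious_sessions(idp, sp):
    print(f"possible session replay: {s['user']} from {s['ip']} on {s['app']}")
```

A bearer cookie replayed from an attacker's machine sails through the SP unchallenged, but it rarely arrives from an IP the IdP ever vouched for, which is what this join exposes.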

TAKEAWAYS:

  1. Treat session integrity as a separate control plane from login assurance.
  2. Reduce attacker dwell time by tightening service-provider session lifetimes for critical apps.
  3. Constrain replay usefulness by forcing application access through VPN/SSE-controlled IP ranges.
  4. Detect hijacks by correlating IdP “known good” IPs with service-provider session telemetry in a SIEM.
  5. Prioritize vendors implementing Shared Signals Framework for continuous access evaluation and rapid session revocation.

Detecting and mitigating common agent misconfigurations

Source: Microsoft Security Blog

Author: Microsoft Defender Security Research Team

URL: https://www.microsoft.com/en-us/security/blog/2026/02/12/copilot-studio-agent-security-top-10-risks-detect-prevent/

ONE SENTENCE SUMMARY:

Agent misconfigurations in Copilot Studio create hidden access paths; Defender hunting queries and governance controls can detect and mitigate them.

MAIN POINTS:

  1. Rapid agent adoption increases exposure from mis-sharing, unsafe orchestration, and weak authentication.
  2. Broad organizational sharing expands attack surface and enables unintended sensitive actions.
  3. Unauthenticated agents become public entry points enabling unauthorized access and data leakage.
  4. Risky HTTP Request actions bypass connector governance, enabling insecure endpoints and privilege escalation.
  5. Email actions with AI-controlled inputs can enable prompt-injection-driven data exfiltration.
  6. Dormant agents, actions, and connections create forgotten attack surface with stale privileged access.
  7. Author (maker) authentication enables privilege escalation by running under creator permissions.
  8. Hardcoded credentials in topics/actions cause secret leakage and uncontrolled reuse.
  9. MCP tools can introduce undocumented integrations and unintended system interactions without oversight.
  10. Generative orchestration without instructions increases drift, prompt abuse, and unsafe action selection.

TAKEAWAYS:

  1. Run Microsoft Defender Advanced Hunting “AI Agents” community queries to surface misconfigurations early.
  2. Enforce Entra ID authentication and restrict sharing using Managed Environments and environment strategy.
  3. Prefer governed connectors over raw HTTP; apply data/advanced connector policies and enforce HTTPS.
  4. Reduce exfiltration paths by controlling email actions, adding runtime protection, and requiring human approvals.
  5. Establish lifecycle governance: inventory reviews, active ownership, deprecation/quarantine, and Key Vault-backed secrets.

Microsoft adds Copilot data controls to all storage locations

Source: BleepingComputer

Author: Sergiu Gatlan

URL: https://www.bleepingcomputer.com/news/microsoft/microsoft-adds-copilot-data-controls-to-all-storage-locations/

ONE SENTENCE SUMMARY:

Microsoft will extend Purview DLP to block Copilot on local Office files via AugLoop, following a Copilot bug exposing protected email summaries.

MAIN POINTS:

  1. Microsoft is expanding DLP controls to restrict Microsoft 365 Copilot processing confidential Office documents.
  2. Current Purview DLP enforcement applies only to SharePoint and OneDrive-stored files.
  3. Local device Word, Excel, and PowerPoint files were previously outside Copilot DLP coverage.
  4. Deployment will occur via the Augmentation Loop (AugLoop) Office component.
  5. Rollout window is scheduled from late March to late April 2026.
  6. Copilot will be blocked from documents restricted by DLP-based sensitivity labeling.
  7. Organizations with existing Copilot-blocking DLP policies get the change automatically enabled.
  8. Enhancement lets AugLoop read sensitivity labels directly from the Office client.
  9. Earlier approach relied on Microsoft Graph using SharePoint/OneDrive URLs, limiting enforcement scope.
  10. A prior Copilot Chat bug summarized confidential Sent Items and Drafts despite active DLP policies.

TAKEAWAYS:

  1. Uniform DLP enforcement across local and cloud storage reduces Copilot data exposure risk.
  2. AugLoop label retrieval from clients removes dependency on file URLs for protection decisions.
  3. Automatic enablement minimizes administrative effort but increases need for policy validation.
  4. Recent Copilot email summarization bug highlights gaps between intended and actual protection behavior.

How to prevent business email compromise

Source: Huntress

Author: unknown

URL: https://www.huntress.com/business-email-compromise-guide/how-to-prevent-business-email-compromise

ONE SENTENCE SUMMARY:

Business email compromise uses targeted social engineering to steal money or data, countered by MFA, verification workflows, monitoring, training, and incident response.

MAIN POINTS:

  1. BEC relies on persuasion, not malware, making it harder for scanners to catch.
  2. Attackers research staff and processes, sometimes hijacking vendor threads to blend in.
  3. Common lures include fake invoices, “CEO” urgency, and payroll or bank-detail changes.
  4. Absence of links/attachments shifts defense toward identity controls and human verification.
  5. Enforcing MFA blocks most credential-stuffing attempts targeting email accounts.
  6. DMARC, DKIM, and SPF checks reduce spoofing; block look-alike domains and mismatched reply-to.
  7. Continuous security awareness training and simulations improve reporting and reduce successful replies.
  8. Dual-approval thresholds for wire transfers prevent single-user mistakes from causing losses.
  9. Help desk must use out-of-band identity proofing before resets to stop impersonation.
  10. Detection hinges on anomalies: odd timing, payment reroutes, risky mailbox rules, and impossible-travel logins.
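Two of the signals above, a mismatched Reply-To and a look-alike domain, are mechanical enough to sketch. The checks below are illustrative only: the message fields and trusted-domain list are invented, and production mail filters would do proper header parsing plus richer domain intelligence.

```python
# Toy BEC signals: Reply-To domain differing from the From domain, and a
# near-miss (look-alike) of a trusted domain. All inputs are invented.

def domain(addr):
    return addr.rsplit("@", 1)[-1].lower()

def reply_to_mismatch(msg):
    return bool(msg.get("reply_to")) and domain(msg["reply_to"]) != domain(msg["from"])

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lookalike(sender_domain, trusted, max_dist=2):
    """Near-miss of a trusted domain, but not an exact match."""
    return any(0 < edit_distance(sender_domain, t) <= max_dist for t in trusted)

msg = {"from": "ap@acme-corp.co", "reply_to": "ceo@freemail.example"}
trusted = ["acme-corp.com"]
print(reply_to_mismatch(msg), lookalike(domain(msg["from"]), trusted))
```

Here `acme-corp.co` sits one edit away from the trusted `acme-corp.com`, so both signals fire; either alone would warrant routing the message into a verification workflow rather than straight to finance.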

TAKEAWAYS:

  1. Prioritize layered defenses because one convincing email can trigger massive financial loss.
  2. Build “verify before you pay” procedures into finance and vendor-management workflows.
  3. Monitor identity and mailbox behaviors continuously to catch takeovers early.
  4. Maintain a rapid BEC playbook: recall funds, secure accounts, preserve logs, investigate endpoints.
  5. Combine ITDR, awareness training, and EDR for prevention, detection, and containment across the attack chain.

It’s time to rethink CISO reporting lines

Source: CSO Online

Author: unknown

URL: https://www.csoonline.com/article/4136293/its-time-to-rethink-ciso-reporting-lines.html

ONE SENTENCE SUMMARY:

Most CISOs still report to IT, risking conflicts of interest; influence, independence, and emerging digital-risk models may reshape governance.

MAIN POINTS:

  1. Benchmark data shows 64% of CISOs report into IT, mainly CIO/CTO.
  2. Only 11% of CISOs report directly to the CEO, limiting executive independence.
  3. Smaller shares report to CFO, CRO, legal counsel, or other business roles.
  4. Reporting lines are slowly shifting, with dotted-line influence sometimes outweighing hierarchy.
  5. Security under CIO perpetuates a legacy view of cybersecurity as technical, not enterprise risk.
  6. Incentives clash: CIOs optimize efficiency while CISOs advocate spending to reduce risk.
  7. Availability goals can conflict with patching and downtime required for secure operations.
  8. IT delivery incentives can starve security resourcing for privacy-by-design and secure projects.
  9. Moving reporting to legal or finance may weaken essential alignment between CISO and IT execution.
  10. Analysts argue IT reporting is a governance anti-pattern that filters risk and weakens escalation.

TAKEAWAYS:

  1. Prioritize CISO independence to ensure unfiltered risk visibility and board-level accountability.
  2. Align incentives so security decisions reflect risk appetite, not IT cost or delivery metrics.
  3. Ensure CISOs are involved early and empowered, regardless of formal org chart placement.
  4. Expect regulators to scrutinize reporting structures, especially in heavily regulated sectors.
  5. Consider CDRO-style models treating digital risk as a board-level domain beyond IT.

Identity Prioritization isn’t a Backlog Problem – It’s a Risk Math Problem

Source: The Hacker News

Author: The Hacker News

URL: https://thehackernews.com/2026/02/identity-prioritization-isnt-backlog.html

ONE SENTENCE SUMMARY:

Prioritize identity work by contextual exposure—controls, hygiene, business impact, and intent—focusing on toxic combinations that drive nonlinear breach risk today.

MAIN POINTS:

  1. Traditional ticket-style prioritization fails in environments with many non-human, never-onboarded identities.
  2. Identity risk emerges from combined control posture, hygiene, business context, and intent.
  3. Controls should be treated as risk signals, not binary configured/not configured checkboxes.
  4. Authentication and session protections meaningfully change exposure for sensitive identities.
  5. Credential and secret management failures amplify compromise likelihood and persistence.
  6. Authorization, auditing, and secure SSO flow handling reduce lateral movement opportunities.
  7. Hygiene gaps like local, orphan, dormant, and unmanaged NHI accounts create systemic weakness.
  8. Business criticality, data sensitivity, and trust-path blast radius determine real-world impact.
  9. Intent signals identify active misuse even when credentials and access look legitimate.
  10. Nonlinear “toxic combinations” demand urgent remediation over numerous low-context findings.
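The nonlinear "toxic combination" idea can be illustrated with a toy multiplicative score: the same findings concentrated on one identity score far higher than when spread across several. The weights and field names below are illustrative assumptions, not the article's actual model.

```python
# Toy contextual exposure score: control gaps, hygiene gaps, business impact,
# and intent interact multiplicatively, so combined weaknesses on one identity
# score nonlinearly higher than isolated findings. Weights are invented.

def exposure_score(identity):
    control_gap = identity["missing_mfa"] + identity["stale_credentials"]
    hygiene_gap = identity["orphaned"] + identity["dormant"]
    impact = identity["business_criticality"]   # 1 (low) .. 5 (crown jewel)
    intent = 3 if identity["anomalous_activity"] else 1
    return (1 + control_gap) * (1 + hygiene_gap) * impact * intent

svc_account = {"missing_mfa": 1, "stale_credentials": 1, "orphaned": 1,
               "dormant": 0, "business_criticality": 5, "anomalous_activity": True}
low_risk    = {"missing_mfa": 1, "stale_credentials": 0, "orphaned": 0,
               "dormant": 0, "business_criticality": 1, "anomalous_activity": False}

print(exposure_score(svc_account), exposure_score(low_risk))
```

Both identities share the "missing MFA" finding, yet the orphaned, anomalously active service account on a crown-jewel system scores 45x higher, which is the sequencing signal a flat backlog of findings cannot provide.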

TAKEAWAYS:

  1. Shift focus from closing findings to shrinking the exposure surface across trust paths.
  2. Weigh missing MFA differently for privileged, business-critical identities than low-impact accounts.
  3. Treat ownership and lifecycle clarity as core security controls for both humans and NHIs.
  4. Elevate incidents when anomalous activity appears alongside weak controls or poor hygiene.
  5. Use contextual scoring to sequence remediation where one fix removes multiple chained risks.

ChatGPT in your inbox? Investigating Entra apps that request unexpected permissions

Source: The Red Canary Blog: Information Security Insights

Author: Matt Graeber

URL: https://redcanary.com/blog/threat-detection/entra-id-oauth-attacks/

ONE SENTENCE SUMMARY:

Red Canary models an Entra ID OAuth consent attack using ChatGPT, outlining investigative questions, required AuditLogs, and remediation strategies.

MAIN POINTS:

  1. Threat research pivots from observed OAuth attacks to anticipate evolving adversary techniques.
  2. Hypothetical Entra ID scenario uses ChatGPT to gain Microsoft Graph email access.
  3. A non-admin user consented to Mail.Read, offline_access, profile, and openid permissions.
  4. The event includes precise timestamp, tenant, user, app IDs, and source IP.
  5. ChatGPT service principal matched the legitimate OpenAI application, not an impersonator.
  6. Mail.Read is highlighted as a frequently abused permission prompting investigation.
  7. Investigation aims to confirm user intent and possible coercion into granting consent.
  8. Authorization questions assess whether email-reading access is appropriate for the app.
  9. Tenant governance concerns include whether the application is sanctioned internally.
  10. Correlated Log Analytics AuditLogs required: “Consent to application” and “Add service principal.”
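The CorrelationId join in point 10 can be sketched as a simple group-and-sort. The event dictionaries below only mimic the shape of Entra AuditLogs entries; real records carry many more fields, and in practice this join is written as a KQL query in Log Analytics.

```python
# Sketch: join "Consent to application" with "Add service principal" audit
# events on a shared correlation ID, then order each chain by timestamp to
# reconstruct the full consent timeline. Event shapes are simplified stand-ins.

def build_timeline(events):
    """Group audit events by correlation_id and order each group by time."""
    groups = {}
    for e in events:
        groups.setdefault(e["correlation_id"], []).append(e)
    for chain in groups.values():
        chain.sort(key=lambda e: e["time"])   # ISO-8601 sorts lexicographically
    return groups

events = [
    {"time": "2026-02-10T14:03:05Z", "op": "Consent to application",
     "correlation_id": "abc-123", "user": "jdoe"},
    {"time": "2026-02-10T14:03:01Z", "op": "Add service principal",
     "correlation_id": "abc-123", "user": "jdoe"},
]
for cid, chain in build_timeline(events).items():
    print(cid, [e["op"] for e in chain])
```

The reconstructed chain shows the service principal appearing in the tenant seconds before the consent grant, which is the ordering an investigator needs to distinguish a first-time consent from reuse of an existing app registration.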

TAKEAWAYS:

  1. Treat high-impact OAuth permissions like Mail.Read as investigation triggers even for known apps.
  2. Validate application authenticity and publisher identity to detect lookalike OAuth abuse.
  3. Determine user intent and potential social engineering behind non-admin consent actions.
  4. Use CorrelationId to link consent events with service principal creation for complete timelines.
  5. Enforce tenant sanctioning and approval workflows to reduce risky third-party OAuth access.

Building a Detection Foundation: Part 1 – The Single-Source Problem

Source: TrustedSec

Author: Carlos Perez

URL: https://trustedsec.com/blog/building-a-detection-foundation-part-1-the-single-source-problem

ONE SENTENCE SUMMARY:

Incident response experience reveals a recurring pattern: organizations overtrust “telemetry” that proves incomplete, misleading, and insufficient under pressure.

MAIN POINTS:

  1. Field observations from incident response highlight consistent failures in security visibility.
  2. Tabletop exercises repeatedly expose gaps between perceived and actual monitoring coverage.
  3. Collected telemetry often looks comprehensive until real attackers stress it.
  4. Hidden assumptions about logging create blind spots during investigations.
  5. Detection confidence frequently exceeds evidence quality and completeness.
  6. Operational reality shows some critical events are never captured or retained.
  7. Response teams commonly discover missing context when reconstructing timelines.
  8. Measurement of security posture is skewed by unvalidated data sources.
  9. Overreliance on dashboards can mask telemetry brittleness and collection failures.
  10. Patterns across cases suggest telemetry programs need continuous verification, not faith.

TAKEAWAYS:

  1. Validate monitoring with realistic exercises rather than trusting tool outputs.
  2. Prioritize completeness, integrity, and retention of logs for investigatory usefulness.
  3. Challenge assumptions about what is actually being captured across environments.
  4. Use incident learnings to iteratively harden telemetry collection and coverage.
  5. Treat visibility as an engineering problem requiring testing, maintenance, and accountability.
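Treating visibility as an engineering problem implies routine checks rather than trust. A minimal sketch of that idea, comparing the log sources you believe you collect against what actually arrived recently (source names and the 24-hour freshness threshold are illustrative assumptions):

```python
# Verification, not faith: flag expected telemetry sources that are silent
# (no events at all) or stale (nothing inside the freshness window).
from datetime import datetime, timedelta, timezone

def verify_coverage(expected_sources, observed_events, now, max_age_hours=24):
    """Return expected sources with no events (silent) or only old events (stale)."""
    latest = {}
    for event in observed_events:
        src, ts = event["source"], event["timestamp"]
        if src not in latest or ts > latest[src]:
            latest[src] = ts
    cutoff = now - timedelta(hours=max_age_hours)
    silent = [s for s in expected_sources if s not in latest]
    stale = [s for s, ts in latest.items() if s in expected_sources and ts < cutoff]
    return {"silent": silent, "stale": stale}

now = datetime(2026, 2, 20, tzinfo=timezone.utc)
events = [
    {"source": "edr", "timestamp": now - timedelta(hours=1)},
    {"source": "dns", "timestamp": now - timedelta(days=9)},
]
report = verify_coverage(["edr", "dns", "vpn"], events, now)
print(report)
```

Run on a schedule, a check like this surfaces the collection failures that dashboards otherwise mask.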

Why Your Perimeter is a Lie and Your Data is the Real Battlefield

Source: CISO Tradecraft® Newsletter

Author: CISO Tradecraft

URL: https://cisotradecraft.substack.com/p/why-your-perimeter-is-a-lie-and-your

ONE SENTENCE SUMMARY:

Security must shift from perimeter tools to continuous, data-centric visibility, governance, and masking to withstand AI-accelerated threats.

MAIN POINTS:

  1. Perimeter-focused “outside-in” defenses fail when attackers move at AI speed.
  2. Data-centric protection treats sensitive information as the primary asset needing direct safeguards.
  3. “Radio Shacking” infrastructure fragments data across clouds, SaaS, and ad-hoc storage choices.
  4. Data sprawl creates too many owners, weak oversight, and inconsistent accountability.
  5. Shared responsibility means cloud providers secure uptime, while customers alone secure their data.
  6. Data discovery is never finished; it must continuously re-identify sensitive data everywhere.
  7. Effective discovery targets content across structured, unstructured, and messaging channels.
  8. Test and QA environments commonly expose unencrypted backups and real sensitive test datasets.
  9. Masking and obfuscation “neuter” non-production data, reducing breach impact and compliance scope.
  10. AI amplifies outcomes; poor permissions and hygiene make mistakes faster and more damaging.

TAKEAWAYS:

  1. Spend initial CISO effort on mapping data locations and access before buying “silver bullet” tools.
  2. Treat stale, ownerless data as high-risk and prioritize deletion alongside protection.
  3. Automate detection of over-permissioned files to shrink organizational blast radius quickly.
  4. Replace real customer data in dev/test with masked equivalents to eliminate “dirty secret” exposure.
  5. Monitor and protect data flows through APIs and partners, not only data stored at rest.
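The “neutering” of non-production data in point 9 can be sketched simply: replace sensitive values with deterministic placeholders before records are copied into dev/test. Field names here are illustrative assumptions, and a real program would likely use format-preserving masking rather than hashes:

```python
# Replace sensitive fields with deterministic, non-reversible placeholders.
# Determinism keeps referential joins working across masked tables.
import hashlib

def mask_record(record, sensitive_fields=("email", "ssn", "card_number")):
    """Return a copy of the record with sensitive values masked."""
    masked = dict(record)
    for field in sensitive_fields:
        if masked.get(field) is not None:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:12]
            masked[field] = f"MASKED-{digest}"
    return masked

prod_row = {"id": 42, "email": "alice@example.com", "ssn": "123-45-6789"}
test_row = mask_record(prod_row)
print(test_row)
```

A breach of the test environment then yields placeholders instead of real customer data, shrinking both impact and compliance scope.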

Anthropic rolls out embedded security scanning for Claude 

Source: CyberScoop

Author: djohnson

URL: https://cyberscoop.com/anthropic-claude-code-security-automated-security-review/

ONE SENTENCE SUMMARY:

Anthropic launched Claude Code Security, which uses AI to scan customer-owned codebases, verify findings, rate severity, and suggest patches for faster vulnerability remediation.

MAIN POINTS:

  1. Claude Code Security scans software repositories for vulnerabilities and proposes patch solutions.
  2. Initial rollout targets a limited set of enterprise and team customers.
  3. Internal red teams stress-tested it via Capture the Flag competitions for over a year.
  4. Pacific Northwest National Laboratory helped refine scanning accuracy.
  5. Anthropic expects AI will scan a significant share of global code soon.
  6. Automated scanning demand may outpace manual reviews as “vibe coding” spreads.
  7. Tool aims to reduce security review effort to a few clicks, with user-approved changes.
  8. Model analyzes component interactions and traces data flow beyond traditional static analysis.
  9. Multi-stage self-verification attempts to disprove findings and filter false positives.
  10. Usage terms restrict scanning to code the company owns and has rights to assess.

TAKEAWAYS:

  1. AI-assisted vulnerability detection is becoming central to modern software security workflows.
  2. Verification steps and severity ratings are critical for prioritizing remediation at scale.
  3. Embedded scanning could materially cut review time while keeping humans in approval loops.
  4. Human expertise remains necessary for higher-level threats despite improved model capability.
  5. Clear usage restrictions address legal and ethical risks around scanning third-party code.

Dynamic Objects in Active Directory: The Stealthy Threat

Source: Tenable Blog

Author: Antoine Cauchois

URL: https://www.tenable.com/blog/active-directory-dynamic-objects-stealthy-threat

ONE SENTENCE SUMMARY:

Active Directory dynamic objects enable stealthy attacks by self-deleting without tombstones, leaving only confusing artifacts and requiring real-time detection.

MAIN POINTS:

  1. Dynamic objects use a TTL timer to self-destruct via the AD garbage collector.
  2. Expired dynamic objects bypass recycle bin and tombstones, eliminating directory-side forensic metadata.
  3. Deletion timing may lag up to 15 minutes, briefly enabling live inspection opportunities.
  4. entryTTL and msDS-Entry-Time-To-Die jointly represent countdown and absolute expiration.
  5. TTL limits are governed by msDS-Other-Settings, including minimum and default lifetimes.
  6. Attackers can evade MAQ evidence by creating self-deleting dynamic computer accounts.
  7. primaryGroupID can reference a dynamic group, yielding invisible membership and later corruption.
  8. Orphan SIDs persist in ACLs, including AdminSDHolder, polluting Tier-0 permissions visibility.
  9. Dynamic GPOs can execute via malicious gPCFileSysPath, then vanish leaving broken gPLink traces.
  10. Entra Connect may miss dynamic deletions, leaving orphaned, functional cloud users indefinitely.

TAKEAWAYS:

  1. Favor in-flight detection over post-mortems because directory evidence can fully disappear.
  2. Monitor and alert on creation of objects with entryTTL or msDS-Entry-Time-To-Die set.
  3. Reduce attack surface by setting ms-DS-MachineAccountQuota to zero where feasible.
  4. Hunt for inconsistencies: unresolved SIDs, broken gPLinks, corrupted primaryGroupID references.
  5. Validate hybrid identity hygiene by detecting and remediating Entra ID orphans from dynamic objects.
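Takeaway 2 amounts to a hunt for objects carrying the TTL attributes before the garbage collector erases them. A minimal sketch over directory objects, where the attribute names (`entryTTL`, `msDS-Entry-To-Die`-style fields, `dynamicObject` class) come from the article but the dict-based objects stand in for a real LDAP search result:

```python
# Flag live dynamic objects: objectClass includes dynamicObject, or either
# TTL attribute is populated. These must be caught in flight, because
# expiry leaves no tombstone to examine afterward.
def find_dynamic_objects(objects):
    """Return distinguished names of objects that look dynamic."""
    hits = []
    for obj in objects:
        classes = [c.lower() for c in obj.get("objectClass", [])]
        has_ttl = (obj.get("entryTTL") is not None
                   or obj.get("msDS-Entry-Time-To-Die") is not None)
        if "dynamicobject" in classes or has_ttl:
            hits.append(obj["distinguishedName"])
    return hits

directory = [
    {"distinguishedName": "CN=SRV01,CN=Computers,DC=corp,DC=local",
     "objectClass": ["computer"]},
    {"distinguishedName": "CN=GHOST$,CN=Computers,DC=corp,DC=local",
     "objectClass": ["computer", "dynamicObject"], "entryTTL": 900},
]
print(find_dynamic_objects(directory))
```

In production this check belongs in a recurring query or real-time alert, since the window before self-deletion may be as short as the TTL plus the garbage-collection lag.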

How Security Tool Misuse Is Reshaping Cloud Compromise

Source: Qualys Security Blog

Author: Sayali Warekar

URL: https://blog.qualys.com/qualys-insights/2026/02/19/how-security-tool-misuse-is-reshaping-cloud-compromise

ONE SENTENCE SUMMARY:

Attackers repurpose secret-scanning tools to find, validate, enumerate, and exploit cloud credentials; strong lifecycle governance and telemetry-based detection reduce impact.

MAIN POINTS:

  1. Real-world campaigns operationalize TruffleHog to harvest exposed cloud credentials at scale.
  2. Cloud compromises increasingly rely on authentication misuse rather than vulnerability exploitation chains.
  3. Typical attack sequence: secret discovery, API validation, permission enumeration, then data access.
  4. Long-lived access keys plus IAM misconfigurations enable rapid escalation and exfiltration.
  5. AWS validation commonly uses sts:GetCallerIdentity to confirm credentials are active.
  6. Post-validation actions become procedural: map policies, probe services, and expand within permission scope.
  7. Telemetry like CloudTrail reveals recognizable call patterns beyond simple tool signatures.
  8. User-agent strings showing “TruffleHog” can aid investigations but are not sufficient alone.
  9. Supply-chain attacks implanted secret harvesting into NPM ecosystems, spreading via trusted APIs.
  10. Governance improvements focus on reducing secret sprawl and enforcing least-privilege identity boundaries.

TAKEAWAYS:

  1. Treat exposed active secrets as immediate access, not merely hygiene debt.
  2. Correlate identity validation and rapid permission enumeration to detect credential misuse early.
  3. Replace static keys with short-lived, role-based access to shrink attacker dwell time.
  4. Harden development pipelines because supply-chain propagation can automate credential harvesting.
  5. Continuous scanning, rotation, and protected audit logging materially limit blast radius and response gaps.
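The validate-then-enumerate pattern in takeaway 2 lends itself to a simple CloudTrail correlation. A minimal sketch, where the 5-minute window and 3-call threshold are illustrative tuning assumptions and the event dicts model CloudTrail records:

```python
# Flag access keys that call sts:GetCallerIdentity (credential validation)
# and then issue several IAM enumeration calls within a short window.
from datetime import datetime, timedelta

ENUM_CALLS = {"ListAttachedUserPolicies", "ListUserPolicies",
              "GetAccountAuthorizationDetails"}

def detect_validation_then_enum(events, window_minutes=5, min_enum_calls=3):
    """events: dicts with accessKeyId, eventName, eventTime (datetime)."""
    flagged = set()
    by_key = {}
    for e in sorted(events, key=lambda e: e["eventTime"]):
        by_key.setdefault(e["accessKeyId"], []).append(e)
    for key, evs in by_key.items():
        for i, e in enumerate(evs):
            if e["eventName"] == "GetCallerIdentity":
                cutoff = e["eventTime"] + timedelta(minutes=window_minutes)
                enum = [x for x in evs[i + 1:]
                        if x["eventTime"] <= cutoff and x["eventName"] in ENUM_CALLS]
                if len(enum) >= min_enum_calls:
                    flagged.add(key)
    return flagged

t0 = datetime(2026, 2, 19, 12, 0)
events = [
    {"accessKeyId": "AKIAEXAMPLE", "eventName": "GetCallerIdentity", "eventTime": t0},
    {"accessKeyId": "AKIAEXAMPLE", "eventName": "ListUserPolicies",
     "eventTime": t0 + timedelta(seconds=30)},
    {"accessKeyId": "AKIAEXAMPLE", "eventName": "ListAttachedUserPolicies",
     "eventTime": t0 + timedelta(seconds=45)},
    {"accessKeyId": "AKIAEXAMPLE", "eventName": "GetAccountAuthorizationDetails",
     "eventTime": t0 + timedelta(seconds=60)},
]
print(detect_validation_then_enum(events))
```

The behavioral sequence survives even when attackers strip tool signatures like the TruffleHog user-agent string.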

Why Zero Trust Needs to Start at the Session Layer

Source: Cloud Security Alliance

Author: unknown

URL: https://cloudsecurityalliance.org/articles/why-zero-trust-needs-to-start-at-the-session-layer

ONE SENTENCE SUMMARY:

NHP applies Zero Trust at the session layer, hiding infrastructure until authentication succeeds and sharply reducing reconnaissance, exploitation, DDoS, and AI-driven attacks.

MAIN POINTS:

  1. Traditional security assumes exposed networks, focusing on encryption, hardening, detection, and response.
  2. TCP/IP’s default visibility enables scanning, probing, and exploitation at machine speed.
  3. The strategic shift is to prevent unauthenticated systems from seeing targets at all.
  4. NHP enforces deny-all and authenticate-before-connect at OSI Layer 5.
  5. Application-layer Zero Trust doesn’t stop connection attempts against exposed services.
  6. Pre-auth exposure enables fingerprinting, credential attacks, exploits, and resource exhaustion.
  7. AI offensive tooling increases speed, scale, adaptiveness, and autonomous exploitation.
  8. Third-generation hiding evolves beyond port knocking and Single-Packet Authorization.
  9. Workflow uses NHP-KNK, ASP authorization, NHP-AOP to NHP-AC, then NHP-ACK details.
  10. DNS can be tied to authenticated handshakes, making domains non-resolvable before approval.

TAKEAWAYS:

  1. Session-layer invisibility reduces attack surface more reliably than faster reactive detection.
  2. Zero-days become harder to exploit when services cannot be reached pre-authentication.
  3. Authenticated/encrypted DNS resolution can prevent infrastructure enumeration and DNS abuses.
  4. Reconnaissance suppression lowers alert fatigue and reduces DDoS susceptibility.
  5. Complementary post-auth controls and careful key/availability operations remain necessary.
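This is not NHP itself (its message types like NHP-KNK are protocol-specific), but the underlying deny-all, authenticate-before-connect idea the article traces back to single-packet authorization can be shown in a toy gate. The shared secret and the 30-second knock lifetime are illustrative assumptions:

```python
# A gate that refuses every connection unless the source recently presented
# a valid HMAC-signed "knock" -- unauthenticated systems never see the service.
import hashlib
import hmac
import time

SECRET = b"demo-shared-secret"  # illustrative; real designs use per-client keys

class SessionGate:
    def __init__(self, knock_ttl=30):
        self.knock_ttl = knock_ttl
        self.authorized = {}  # source -> authorization expiry (epoch seconds)

    def receive_knock(self, source, timestamp, signature):
        """Authorize source only if the HMAC over (source, timestamp) verifies."""
        msg = f"{source}|{timestamp}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            self.authorized[source] = time.time() + self.knock_ttl
            return True
        return False

    def allow_connection(self, source):
        """Default deny: unknown or expired sources are dropped silently."""
        return self.authorized.get(source, 0) > time.time()

gate = SessionGate()
ts = "1760000000"
sig = hmac.new(SECRET, f"10.0.0.5|{ts}".encode(), hashlib.sha256).hexdigest()
gate.receive_knock("10.0.0.5", ts, sig)
print(gate.allow_connection("10.0.0.5"), gate.allow_connection("203.0.113.9"))
```

Scanners probing the service before knocking get nothing back, which is the reconnaissance suppression the takeaways describe; a production design would also need replay protection on the timestamp.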

Dark web monitoring: Common gaps and how to close them

Source: Feedly Blog

Author: Mary D’Angelo

URL: https://feedly.com/ti-essentials/posts/dark-web-monitoring-common-gaps-and-how-to-close-them

ONE SENTENCE SUMMARY:

Effective deep and dark web monitoring requires playbooks, governance, and TIP-ready structured data to reduce noise and enable decisions.

MAIN POINTS:

  1. Structure, not access, determines whether DDW monitoring scales and delivers value.
  2. Overreaction and disengagement both stem from noisy collection without disciplined workflows.
  3. Define DDW as unindexed criminal forums, marketplaces, leak sites, dumps, and private communities.
  4. Establish a breach-claim playbook before incidents to ensure consistent, rapid response.
  5. Capture evidence with full context, metadata, and safe handling of samples.
  6. Identify actors as TIP entities, recording handle history, reputation, and cross-references.
  7. Correlate claims across platforms and feeds to detect recycled data and coordinated posting.
  8. Evaluate credibility using structured skepticism and verifiable sample alignment with internal data.
  9. Implement governance via collection policy and SOPs, including OpSec and artifact storage rules.
  10. Normalize DDW findings into a STIX-aligned data model for queryable TIP ingestion and relationships.

TAKEAWAYS:

  1. Playbooks turn breach and extortion claims into routine, auditable processes instead of panic.
  2. Governance answers legal, leadership, and operational risk questions before they become issues.
  3. Evidence integrity improves with screenshots, PDFs, hashes, metadata templates, and source attribution.
  4. Hybrid collection works best: vendors for breadth, analysts for depth and validation.
  5. Expanding coverage to chat platforms like Telegram closes major modern DDW visibility gaps.
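The STIX-aligned normalization in point 10 can be sketched as a mapping from a raw capture into related objects a TIP can ingest. The input field names are illustrative, and the IDs below are simplified for readability (STIX 2.1 proper requires UUID-based identifiers):

```python
# Map a raw dark web post into a STIX-shaped bundle: a threat-actor entity,
# a report for the claim, and a relationship linking them for later queries.
def normalize_ddw_finding(post):
    """post: raw capture with actor handle, platform, claim, and timestamp."""
    actor_id = f"threat-actor--{post['handle'].lower()}"
    report_id = f"report--{post['platform']}-{post['post_id']}"
    actor = {"type": "threat-actor", "id": actor_id, "name": post["handle"],
             "aliases": post.get("known_aliases", [])}
    report = {"type": "report", "id": report_id, "name": post["claim"],
              "published": post["captured_at"],
              "external_references": [{"source_name": post["platform"],
                                       "url": post["url"]}]}
    rel = {"type": "relationship", "relationship_type": "attributed-to",
           "source_ref": report_id, "target_ref": actor_id}
    return {"type": "bundle", "objects": [actor, report, rel]}

bundle = normalize_ddw_finding({
    "handle": "DataBroker99", "platform": "forum-x", "post_id": "18421",
    "claim": "Alleged sale of 2M customer records",
    "url": "http://example.onion/post/18421",
    "captured_at": "2026-02-18T09:30:00Z",
})
print(len(bundle["objects"]))
```

Because the actor is a first-class entity rather than free text, later claims by the same handle on other platforms can be cross-referenced instead of re-triaged from scratch.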

CCM v4.1 Transition Timeline

Source: Cloud Security Alliance

Author: unknown

URL: https://cloudsecurityalliance.org/articles/ccm-v4-1-transition-timeline

ONE SENTENCE SUMMARY:

CSA’s CCM v4.1 updates cloud security controls and artifacts, adds transition timelines for STAR programs, and maintains CCSK unchanged.

MAIN POINTS:

  1. Released January 28, CCM v4.1 replaces CCM v4.0.13 with expanded coverage.
  2. Introduced 11 new control specifications across DCS, LOG, SEF, STA, and TVM.
  3. Removed one control from the Identity and Access Management (IAM) domain.
  4. Enhanced existing control objectives through minor and major revisions for stronger risk alignment.
  5. Refined control language to improve clarity, consistency, interpretability, and auditability.
  6. Updated CAIQ v4.1 includes 283 questions aligned to CCM v4.1 controls.
  7. Published refreshed Implementation and Auditing Guidelines alongside the CCM v4.1 release.
  8. Updated CCM-Lite v4.1 provides baseline controls for all cloud service providers.
  9. Released CAIQ-Lite for simplified, efficient vendor assessments based on the full CAIQ.
  10. Collaborating to update and expand mappings from CCM v4.0.13 to CCM v4.1.

TAKEAWAYS:

  1. Organizations should plan migration now because STAR programs will ultimately require CCM/CAIQ v4.1.
  2. STAR Registry accepts both versions until December 2027, then only v4.1 for new submissions.
  3. Existing STAR registry services get a two-year transition window after December 2027.
  4. STAR Level 2 attestation and certification will adopt v4.1, despite temporary dual acceptance.
  5. CCSK curriculum and exam remain unaffected by the CCM v4.1 release for now.

Hackers target Microsoft Entra accounts in device code vishing attacks

Source: BleepingComputer

Author: Bill Toulas

URL: https://www.bleepingcomputer.com/news/security/hackers-target-microsoft-entra-accounts-in-device-code-vishing-attacks/

ONE SENTENCE SUMMARY:

Threat actors abuse Microsoft OAuth device-code flow with vishing and phishing to obtain tokens, bypass MFA, and access Entra-linked SaaS data.

MAIN POINTS:

  1. Campaigns target technology, manufacturing, and financial organizations via device-code phishing plus vishing.
  2. Attacks abuse OAuth 2.0 Device Authorization flow rather than deploying malicious OAuth apps.
  3. Legitimate Microsoft OAuth client IDs are leveraged to increase victim trust.
  4. Victims are coached to enter a user code at microsoft.com/devicelogin.
  5. Users complete normal login and MFA, unknowingly authorizing an OAuth application.
  6. Attackers exchange device codes for refresh tokens, then mint access tokens.
  7. Obtained tokens enable access without re-prompting MFA after initial authorization.
  8. Compromise extends to SSO-connected SaaS like Microsoft 365, Salesforce, Slack, and others.
  9. ShinyHunters is suspected and reportedly claimed involvement, though independent confirmation is lacking.
  10. Defensive guidance includes disabling device code flow, auditing consents, and reviewing sign-in logs.

TAKEAWAYS:

  1. Device-code flow turns user-approved MFA into attacker-controlled token issuance.
  2. Using Microsoft-branded OAuth apps and pages reduces typical phishing detection cues.
  3. Refresh tokens are the critical prize; they enable durable, MFA-free session access.
  4. Monitoring for device-code authentication events can reveal intrusions earlier.
  5. Least-use features like device-code login should be disabled unless operationally required.
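Takeaway 4 translates to a simple sweep of sign-in records. The sketch below assumes an Entra sign-in log export where device-code authentications carry an `authenticationProtocol` value of `deviceCode`; verify the exact field name and casing against your export before relying on it:

```python
# Surface device-code sign-ins, which should be rare in tenants where the
# flow is not operationally required -- each hit deserves a look.
def find_device_code_signins(signin_logs):
    """Return (user, app, time) summaries for device-code authentications."""
    return [
        {"user": e["userPrincipalName"], "app": e["appDisplayName"],
         "time": e["createdDateTime"]}
        for e in signin_logs
        if e.get("authenticationProtocol") == "deviceCode"
    ]

logs = [
    {"userPrincipalName": "bob@contoso.com", "appDisplayName": "Microsoft Office",
     "authenticationProtocol": "deviceCode",
     "createdDateTime": "2026-02-19T14:02:00Z"},
    {"userPrincipalName": "amy@contoso.com", "appDisplayName": "Outlook",
     "authenticationProtocol": "none",
     "createdDateTime": "2026-02-19T14:05:00Z"},
]
print(find_device_code_signins(logs))
```

Pairing this with a Conditional Access policy that blocks the device code flow outright closes the gap rather than just observing it.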