Practice with mocks for national-level exams (FACT, FACT Plus, NET, CUET, and more), learn from structured notes, and get your doubts solved in one place.
Timed practice tests with instant scoring and per-question explanations.
This mock drills into the two hardest acquisition surfaces in modern digital forensics — public-cloud workloads and Internet-of-Things devices — and the legal, architectural, and procedural obstacles that distinguish them from traditional disk forensics. Thirty hard questions span cloud service models (IaaS, PaaS, SaaS, FaaS) and what each layer surrenders to the investigator, deployment models (public, private, community, hybrid), multi-tenancy and data co-mingling, jurisdictional pathways for cross-border production (MLAT, the US CLOUD Act 2018, GDPR Article 48, India's DPDP Act 2023, IT Act §69 read with the 2009 Rules, the CERT-In Directions of 28 April 2022 with their 6-hour reporting and 180-day log-retention rules), the major cloud audit logs (AWS CloudTrail vs CloudWatch vs Config vs VPC Flow Logs, Azure Activity Log vs Entra ID Sign-in Logs vs Diagnostic Logs, GCP Cloud Audit Logs Admin Activity vs Data Access, Microsoft 365 Unified Audit Log retention by SKU), snapshot-based acquisition (EBS snapshot → cross-account share → forensic VPC restore), Linux memory acquisition with LiME, and the limits of memory acquisition on serverless platforms.

The IoT half covers smart-hub voice assistants and the Echo cloud-account architecture exposed by *Arkansas v. Bates* (2017), wearables and the heart-rate / step-count timeline that proved decisive in *State v. Dabate* (Connecticut, 2017), smart-camera and doorbell acquisition when JTAG is gone and the eMMC is BGA-soldered (chip-off plus companion-app plus cloud), Android and iOS companion-app forensic artefacts (SQLite, SharedPreferences, plist, OAuth tokens), connected-vehicle Event Data Recorders extracted with the Bosch CDR tool over OBD-II under 49 CFR Part 563, and the special discipline required for industrial-control SCADA networks running Modbus and OPC-UA where active scanning can disrupt physical-world processes (IEC 62443).
It is pitched at MSc and final-year BSc cyber forensics students at NFSU, LNJN-NICFS, and other Indian universities, and at FACT, UGC-NET and CHFI aspirants who need the cloud and IoT acquisition layers locked in. This is a **premium**, **hard**-difficulty mock — distractors target the misconceptions a careful student is most likely to fall into (CloudTrail vs CloudWatch vs Config; Lambda vs EC2 acquisition; MLAT vs CLOUD Act vs GDPR Article 48; Azure Activity Log vs Entra Sign-in Logs; chip-off vs JTAG when neither is straightforward).

Themes covered:
- Cloud service models (IaaS / PaaS / SaaS / FaaS) and the evidence each layer yields
- Cloud deployment models (public, private, community, hybrid) and multi-tenancy
- AWS CloudTrail, CloudWatch, Config, VPC Flow Logs; Azure Entra Sign-in / Activity / Diagnostic Logs; GCP Audit Logs Admin Activity vs Data Access; M365 Unified Audit Log
- Snapshot acquisition (EBS / managed disk / persistent disk); Linux RAM with LiME; serverless limits
- Jurisdiction: MLAT, CLOUD Act 2018, GDPR Article 48, DPDP 2023, IT Act §69, CERT-In Directions 2022, data sovereignty
- Standards: NIST SP 800-145, NIST IR 8006, NIST SP 800-201, NIST SP 800-86, ISO/IEC 27037, CSA Domain 12, IEC 62443
- IoT classes: voice assistants (Echo / Home / HomePod), wearables (Fitbit, Apple Watch, Garmin), smart cameras (Ring, Nest), connected vehicles, industrial IoT
- IoT acquisition: chip-off vs JTAG, companion-app SQLite/SharedPreferences/plist, cloud-account artefacts
- Court precedents: *Arkansas v. Bates* (Echo, 2017), *State v. Dabate* (Fitbit, 2017)
- Connected-vehicle CAN-bus, OBD-II, EDR under 49 CFR Part 563, Bosch CDR tool

Each question carries a detailed 250+ word explanation citing primary sources — NIST IR 8006 and SP 800-201, NIST SP 800-145, ISO/IEC 27037, the CLOUD Act, GDPR, DPDP 2023, the IT Act, CERT-In Directions, AWS / Azure / GCP / Microsoft official documentation, the *Bates* and *Dabate* dockets, 49 CFR Part 563, ISO 15765-4, IEC 62443, and Hassan's *Digital Forensics Basics*. Allow 15 minutes — the explanations are long enough to use as study notes by themselves.
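As a taste of the CloudTrail-reading skill these questions drill, here is a minimal Python sketch that reduces one audit event to the who / what / where / when an examiner records. The field names follow AWS's published CloudTrail event schema; the record itself is invented for illustration:

```python
import json
from datetime import datetime, timezone

# A synthetic CloudTrail-style record. Field names follow the documented
# CloudTrail event schema; the values are invented for illustration.
RECORD = json.dumps({
    "eventTime": "2024-03-01T14:32:05Z",
    "eventName": "CreateSnapshot",
    "eventSource": "ec2.amazonaws.com",
    "awsRegion": "ap-south-1",
    "sourceIPAddress": "203.0.113.7",
    "userIdentity": {"type": "IAMUser", "userName": "analyst"},
})

def summarise_event(raw: str) -> dict:
    """Reduce one CloudTrail event to the facts an examiner logs."""
    ev = json.loads(raw)
    return {
        "when": datetime.strptime(ev["eventTime"],
                                  "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc),
        "who": ev["userIdentity"].get("userName", ev["userIdentity"]["type"]),
        "what": f'{ev["eventSource"]}:{ev["eventName"]}',
        "from_ip": ev["sourceIPAddress"],
        "region": ev["awsRegion"],
    }

summary = summarise_event(RECORD)
print(summary["who"], summary["what"], summary["when"].isoformat())
```

In a real engagement the same reduction runs over thousands of records exported from a CloudTrail trail; the point here is only which fields answer which investigative question.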
This mock covers Network Forensics as it is actually practised — reading packet captures, parsing logs, and reconstructing what happened on the wire. Thirty medium-difficulty questions across the eight pillars a network-forensic analyst (and any FACT or NFSU MSc Cyber Forensics aspirant) must lock in: packet-capture fundamentals (tcpdump and dumpcap snaplen, BPF capture filters versus Wireshark display filters, ring buffers for continuous capture, libpcap versus PCAP-NG), the TCP/IP stack as a forensic timeline (Ethernet framing, IPv4 TTL and fragmentation, TCP flags including the FIN-versus-RST distinction, the three-way handshake, sequence numbers, and retransmissions), per-protocol artefacts (HTTP request headers, the cleartext SNI in the TLS ClientHello, DNS record types and exfiltration patterns, the SMTP envelope, FTP active versus passive, SMB on port 445, the SSH banner), flow telemetry versus full PCAP (NetFlow/IPFIX, sFlow sampling), intrusion detection (Snort/Suricata rule anatomy, Zeek protocol logs, MITRE ATT&CK lateral-movement techniques), web and proxy logs (Apache Common Log Format, IIS W3C Extended Log Format with its UTC time field), timestamp normalisation across UTC/IST/NTP-drifted endpoints, attacker techniques visible in packets (SYN scans, DNS tunnelling, JA3/JA3S TLS fingerprinting), and the Indian regulatory layer (IT Act sections 69 and 69B with the CERT-In Directions of 28 April 2022 mandating 180-day log retention within Indian jurisdiction). It is pitched at MSc Cyber Forensics students at NFSU, LNJN-NICFS, and other Indian universities, and at FACT, UGC-NET, and entry-level SOC analyst aspirants who need the network-forensics layer locked in before tackling deeper malware-traffic analysis, encrypted-payload reconstruction, and case studies. The questions assume you already know the basics of digital forensics; the medium-difficulty bar is set so that a careful read of an explanation closes the gap if you got the question wrong. 
Themes covered:
- Packet capture: tcpdump/Wireshark/dumpcap, BPF filter syntax, ring buffers, libpcap vs PCAP-NG
- TCP/IP stack: Ethernet, IPv4 TTL/fragmentation, TCP flags, three-way handshake, retransmissions
- Protocol artefacts: HTTP, HTTPS ClientHello SNI, DNS records and tunnelling, SMTP, FTP active/passive, SMB, SSH
- Flow telemetry: NetFlow/IPFIX vs full PCAP, sFlow sampling
- Intrusion detection: Snort/Suricata rule anatomy, Zeek protocol logs, MITRE ATT&CK lateral movement
- Web/proxy logs: Apache CLF, IIS W3C Extended, NTP and UTC timestamp normalisation
- Attacker techniques in packets: SYN scans, DNS tunnelling, JA3/JA3S TLS fingerprints
- Indian context: IT Act sections 69 and 69B, CERT-In Directions of 28 April 2022 (180-day log retention)

Each question carries a detailed 220+ word explanation citing primary sources — Davidoff and Ham’s *Network Forensics*, the relevant RFCs (791, 959, 1035, 4253, 5321, 6066, 7011, 9293), NIST SP 800-86, the Wireshark and Snort documentation, MITRE ATT&CK, and the IT Act with the CERT-In Directions. Allow 15 minutes; the explanations are long enough to use as study notes by themselves.
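The web-log and timestamp-normalisation themes reduce to a small parsing exercise. A hedged Python sketch, using a made-up Apache Common Log Format line, that extracts the fields and converts the +0530 IST timestamp to UTC:

```python
import re
from datetime import datetime, timezone

# Regex for Apache Common Log Format: ip ident user [timestamp] "req" status size
CLF = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

# An invented example entry, stamped in IST (+0530).
line = '198.51.100.9 - - [28/Apr/2022:10:05:30 +0530] "GET /login HTTP/1.1" 200 512'

m = CLF.match(line)
ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")  # timezone-aware parse
utc = ts.astimezone(timezone.utc)                        # normalise to UTC

print(m["ip"], m["method"], m["path"], m["status"], utc.isoformat())
```

Normalising every source to UTC before merging logs is exactly the step the timeline questions in this mock probe: 10:05:30 IST and 04:35:30 UTC are the same instant.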
This mock covers the foundations of Computer Forensics as set out in the FACT exam syllabus (Section B, Elective III, sub-section 1 — Computer Forensics). Thirty questions across the nine pillars a first-year MSc Cyber Forensics student must lock in before tackling case law, Windows-internals deep-dives, malware analysis, and reconstruction: computer hardware seen through a forensic lens (motherboard chipset, RAM volatility, HDD vs SSD, the CPU at the top of the order of volatility), the modern boot process (BIOS vs UEFI, MBR vs GPT, systemd as PID 1 on Linux), file-system fundamentals (NTFS journaling, FAT32's 4 GiB cap, ext4 extents and crtime), first-responder principles (RFC 3227 order of volatility, write blockers, volatile vs non-volatile classification), imaging and hashing (E01 vs raw dd, MD5 collisions vs SHA-256, hex digest lengths), search-and-seizure under post-2024 Indian law (BNSS replacing CrPC, IT Act 2000 sections 65/66/66A/66B with the Shreya Singhal strike-down), Windows artefacts (Registry hives and USBSTOR, Prefetch, the $I/$R Recycle Bin pair, the USN Journal), Linux artefacts (~/.bash_history, /var/log/, dot-file convention), and recovery techniques for deleted, hidden, and altered files (carving, slack space, NTFS Alternate Data Streams, what "delete" actually does). It is pitched at BSc and first-year MSc cyber forensics students at NFSU, LNJN-NICFS, and other Indian universities, and at FACT and UGC-NET aspirants who need the Computer-Forensics foundations locked in. This sits at the introductory tier — vocabulary, definitions, and the most-asked concepts that anchor every later paper. It is **not** a duplicate of Mock #1 (which covers digital-forensics vocabulary across the whole field) — this mock drills specifically into Computer Forensics as a sub-discipline. 
Themes covered:
- Computer hardware from a forensic angle: motherboard chipset, RAM volatility, HDD vs SSD with TRIM, the CPU at the top of the order of volatility
- Boot process and firmware: BIOS vs UEFI, MBR vs GPT, Linux systemd as PID 1
- File-system fundamentals: NTFS journaling, FAT32 4 GiB cap, ext4 extents and crtime
- First-responder principles: RFC 3227 order of volatility, hardware write blockers, volatile vs non-volatile
- Imaging and hashing: E01 vs raw dd, MD5 vs SHA-256, hex digest lengths
- Search and seizure under Indian law: BNSS 2023 (replacing CrPC), IT Act sections 65 / 66 / 66A / 66B with the 2015 Shreya Singhal strike-down
- Windows artefacts: Registry hives, Prefetch, $I/$R Recycle Bin pair, USN Journal
- Linux artefacts: ~/.bash_history, /var/log/, the dot-file hidden convention
- Recovery of deleted/hidden/altered files: file carving, slack space, NTFS Alternate Data Streams

Each question carries a detailed 220+ word explanation citing standard references (Carrier's *File System Forensic Analysis*, Casey's *Digital Evidence and Computer Crime*, Carvey on Windows Registry forensics, RFC 3227, NIST SP 800-86 and 800-88, NIST FIPS PUB 180-4, the IT Act 2000, the BNSS 2023, the Shreya Singhal judgment, and Microsoft / Linux kernel documentation). Allow 15 minutes; the explanations are long enough to use as study notes by themselves. If you can pass this mock comfortably, you have the Computer-Forensics vocabulary that the application-level mocks (#3 Windows artefacts, #4 mobile acquisition, #5 email forensics) build on.
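The hex-digest-length fact behind the imaging-and-hashing theme can be checked in a few lines of standard-library Python (the "evidence" bytes below are illustrative only):

```python
import hashlib

# Hash the same byte-for-byte content with MD5 and SHA-256 and compare
# digest lengths: 32 vs 64 hex characters, i.e. 128 vs 256 bits.
evidence = b"raw dd-style image contents (illustrative bytes only)"

md5 = hashlib.md5(evidence).hexdigest()
sha256 = hashlib.sha256(evidence).hexdigest()

print(len(md5), len(sha256))  # 32 64

# Integrity verification is a plain string comparison of the acquisition-time
# and analysis-time digests; any single-bit change flips the digest entirely.
assert hashlib.sha256(evidence).hexdigest() == sha256
```

This is also why exam questions pair MD5's known collision attacks with the recommendation to record SHA-256 alongside it.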
This mock covers mobile device forensics — acquisition strategies, the iOS and Android security architectures that determine what you can extract, the vendor tools used in Indian forensic labs, and the anti-forensics tactics suspects routinely use. Thirty questions cover logical, file-system, physical, JTAG and chip-off acquisition; BFU vs AFU device state; the iOS Secure Enclave, Effaceable Storage, and Class A/B/C/D file protection; Checkm8, GrayKey, Cellebrite UFED and Premium; Android File-Based Encryption, Direct Boot and Verified Boot; SIM card structure (ICCID, IMSI, MSISDN, ADN, LDN, EF_SMS); SQLite WAL and freelist forensics; vault apps, app cloning, and disappearing-message platforms. It is pitched at MSc cyber forensics students at NFSU and LNJN-NICFS, certified examiner candidates (CHFI Mobile, CCO, CCPA), state-FSL trainees, and FACT aspirants who need the mobile section locked in. Mobile forensics has overtaken disk forensics as the highest-volume work in Indian forensic labs since 2020 — most cyber-crime cells now process more phones than computers, and the iOS / Android security architectures keep evolving fast enough that mock content needs to stay current with each iOS major release. 
Themes covered:
- The acquisition hierarchy: logical → file-system → physical → JTAG → chip-off
- BFU vs AFU and the BFU-lockout problem during transport
- iOS Secure Enclave, Effaceable Storage, NSFileProtection class A/B/C/D
- Checkm8 (A5–A11), GrayKey, Cellebrite UFED and Premium — what each can and cannot do
- Android File-Based Encryption (FBE), Direct Boot, Verified Boot (AVB 2.0)
- SIM card forensics: ICCID, IMSI vs IMEI, ADN / LDN / EF_SMS, SIM-side recoverable SMS
- Faraday isolation and the BFU-lockout battery problem
- SQLite forensics: WAL files, freelist carving, journal modes
- Anti-forensics: vault apps (Calculator+, AppLock, Parallel Space), app cloning (Samsung Dual Messenger, Xiaomi App Twin, Island), disappearing messages, encrypted wipe
- App-specific artefacts: WhatsApp msgstore.db / msgstore.db.crypt14/15, Telegram Secret Chats vs Cloud Chats
- Cloud forensics: iCloud, Google Account, messenger cloud backups

Each question carries a detailed explanation citing NIST SP 800-101 Rev 1, the Apple Platform Security Guide, the Android Open Source Project documentation, vendor knowledge bases (Cellebrite, Magnet, Grayshift), Mahalik et al.'s *Practical Mobile Forensics*, and INTERPOL guidelines. Allow 15 minutes; some questions require knowledge of vendor tooling, others require iOS / Android internals. The explanations are long enough to use as study notes by themselves.
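The WAL questions come down to one observable fact: in WAL journal mode, committed rows sit in the `-wal` side file until a checkpoint folds them into the main database, so the side file itself is evidence. A small Python sketch using the standard `sqlite3` module (file names invented) that demonstrates this:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database in WAL mode and commit one row.
tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "chat.db")   # stand-in for a messenger database

con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO messages (body) VALUES ('hello')")
con.commit()

# After the commit (but before a checkpoint) the committed frames live
# in chat.db-wal, not yet in chat.db — which is why examiners must
# collect the -wal and -shm files alongside the main database.
wal_path = db + "-wal"
wal_exists = os.path.exists(wal_path)
wal_size = os.path.getsize(wal_path)
print(wal_exists, wal_size)

con.close()  # closing normally checkpoints and empties the -wal file
```

Copying only the `.db` file from a live handset can therefore silently drop the most recent messages.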
This mock covers email forensics — header analysis, sender authentication (SPF, DKIM, DMARC), spoofing techniques and how to detect them, phishing investigation, business email compromise (BEC), and the legal framework for email-based offences in India. Thirty questions test what every header field means and how to read it, how SPF / DKIM / DMARC verdicts appear in Authentication-Results, the difference between display-name spoofing and full envelope forgery, how to trace a phishing campaign back to its kit and infrastructure, attachment forensics (MIME, Base64, hash matching to MITRE ATT&CK and VirusTotal), and the prosecution handles under IT Act Sections 66C, 66D and BNS Section 318. It is pitched at BSc and first-year MSc cyber forensics students, FACT and UGC-NET aspirants, and incident-response analysts at Indian SOCs and CERT-In-affiliated teams. Email is the single largest entry vector for cyber-crime complaints registered on the National Cyber Crime Reporting Portal; every cyber-crime cell sees dozens of email cases per week, which makes mastering this area one of the highest-leverage investments for any cyber forensics student. 
Themes covered:
- Email header anatomy: Received, Message-ID, Return-Path, Reply-To, From, Date, X-Originating-IP
- SMTP / IMAP / POP3 — what each protocol does, the standard ports, and what trace each leaves
- SPF (RFC 7208), DKIM (RFC 6376), DMARC (RFC 7489), ARC (RFC 8617), BIMI
- Header spoofing vs envelope spoofing; how From and Return-Path can disagree
- Display-name attacks, IDN homograph attacks vs ASCII typosquats, lookalike-domain detection
- Phishing kit fingerprinting and OSINT pivots from a phishing URL (WHOIS, DNS, ASN, crt.sh)
- Attachment forensics: MIME structure, Base64, hash-to-malware-family lookup
- Email storage formats: EML, MSG, PST, OST, MBOX — what each is and how to parse
- Indian legal handle: IT Act Sections 66C (identity theft), 66D (cheating by personation), BNS Section 318
- Operational response: 1930 helpline, cybercrime.gov.in, the CFCFRMS golden-hour fund-hold mechanism

Each question carries a detailed explanation citing the relevant RFC verbatim, NIST SP 800-86 for incident-response procedure, MITRE ATT&CK for technique mappings, the IT Act for the Indian legal handle, and Microsoft / Google admin documentation for header behaviour. Allow 15 minutes; the explanations are long enough to use as study notes by themselves.
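The From-versus-Return-Path disagreement at the heart of the spoofing questions can be shown with Python's standard `email` module. The message below is synthetic, constructed so the From display name and the Return-Path domain disagree:

```python
from email import message_from_string
from email.utils import parseaddr

# Synthetic phishing message: friendly display name, mismatched bounce domain.
RAW = """\
Return-Path: <bounce@phish-kit.example>
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=phish-kit.example; dkim=none; dmarc=fail
From: "Bank Support" <support@bank.example>
Reply-To: attacker@phish-kit.example
Subject: Verify your account
To: victim@example.com

Click the link.
"""

msg = message_from_string(RAW)
display, from_addr = parseaddr(msg["From"])
return_path = parseaddr(msg["Return-Path"])[1]

# A domain mismatch between the header From and the envelope Return-Path is
# a classic triage signal, confirmed here by the spf=fail / dmarc=fail verdicts.
mismatch = from_addr.split("@")[1] != return_path.split("@")[1]
print(display, from_addr, return_path, mismatch)
```

Triage rarely stops at this check, but it is the one-line test most header-analysis questions are built around.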
This mock covers Windows forensic artefacts in depth — the Registry hives, the Event Log infrastructure, Prefetch, ShimCache, Amcache, USB device tracking, ShellBags, UserAssist, RecentDocs, Jump Lists, LNK files, the NTFS journal files, Volume Shadow Copies, SRUM and the Recycle Bin. Thirty questions test what each artefact records, where it lives on disk, what tool to use to parse it, and what real-investigation question each artefact answers — who logged in, what programs ran, what USB drives were inserted, what files were opened, what was the user doing at 14:32. It is pitched at BSc and MSc cyber forensics students at NFSU and similar Indian universities, certified examiner aspirants (CHFI, GCFE, GCFA), and FACT or state-FSL examinees who routinely face Windows-artefact questions. Windows desktop forensics is the largest single section in any practical cyber-forensics paper because the majority of seized devices in Indian cyber-crime cells are still Windows machines — knowing the artefacts cold is the difference between a competent examiner and a paper-only candidate. 
Themes covered:
- Registry hive structure: SAM, SECURITY, SOFTWARE, SYSTEM, NTUSER.DAT, USRCLASS.DAT — and exactly where each file lives on disk
- USB insertion artefacts: USBSTOR, MountedDevices, MountPoints2 and setupapi.dev.log
- Persistence keys: Run, RunOnce, Services, Image File Execution Options
- ShellBags, RecentDocs, UserAssist, Jump Lists, LNK files — what each tells you about user activity
- Critical Event Log IDs: 4624, 4625, 4634, 4647, 4672, 4688, 4720, 7045, 1102 — what each means and when it fires
- Prefetch (.pf): up to 8 last-run timestamps in Windows 10+, the eviction rules
- ShimCache (AppCompatCache) and Amcache.hve — the differences and why both matter
- $MFT, $LogFile, $UsnJrnl — the NTFS forensic trio
- Volume Shadow Copies, SRUM (SRUDB.dat), the Recycle Bin ($I / $R files)
- Tooling: Eric Zimmerman tools (Registry Explorer, RECmd, EvtxECmd, MFTECmd, AmcacheParser, JLECmd, LECmd, RBCmd), KAPE, Autopsy, Magnet AXIOM

Each question carries a detailed explanation citing Microsoft Learn for the official artefact behaviour, Carvey's *Windows Registry Forensics* and *Windows Forensic Analysis Toolkit*, Eric Zimmerman documentation, MITRE ATT&CK technique mappings, and Mandiant research where relevant. Allow 15 minutes — these are quick-recall questions that any practising examiner should answer in well under 30 seconds each. The explanations are long enough to use as study notes by themselves; even if you skip the timed run, reading through them once is a complete refresher.
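The $I record the Recycle Bin questions test has a small fixed layout on Windows 10 (assumed here from public format write-ups: int64 version 2, int64 original size, FILETIME deletion time, uint32 path length in characters, then a UTF-16LE path). A Python sketch that builds a synthetic record and parses it back:

```python
import struct
from datetime import datetime, timedelta, timezone

# FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC.
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def build_i_record(size: int, deleted_utc: datetime, path: str) -> bytes:
    """Build a synthetic Windows 10 $I record (format version 2) for testing."""
    delta = deleted_utc - EPOCH_1601
    filetime = (delta.days * 86400 + delta.seconds) * 10_000_000 + delta.microseconds * 10
    name = (path + "\x00").encode("utf-16-le")          # null-terminated UTF-16LE path
    return struct.pack("<qqqI", 2, size, filetime, len(path) + 1) + name

def parse_i_record(blob: bytes):
    """Recover version, original size, deletion time, and original path."""
    version, size, filetime, nchars = struct.unpack_from("<qqqI", blob)
    deleted = EPOCH_1601 + timedelta(microseconds=filetime // 10)
    path = blob[28:28 + 2 * nchars].decode("utf-16-le").rstrip("\x00")
    return version, size, deleted, path

blob = build_i_record(4096,
                      datetime(2024, 3, 1, 14, 32, tzinfo=timezone.utc),
                      r"C:\Users\x\secret.docx")
print(parse_i_record(blob))
```

The $R sibling holds the file content itself; the $I metadata record is what answers "what was deleted, from where, and when", which is exactly how the exam frames it.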
This mock covers the foundational concepts and vocabulary every digital forensics student must know — the building blocks of every later course, every exam paper, and every real investigation. Thirty questions across storage and memory, the order of volatility, write blockers, forensic imaging, hashing for integrity, file systems (NTFS, ext4, APFS, FAT), chain of custody, first-responder procedures, Faraday bags, and the routine artefacts (Windows Registry, event logs, browser cache, email headers) that turn raw devices into evidence. It is pitched at BSc and first-year MSc cyber forensics students at NFSU, LNJN-NICFS and other Indian universities, and at FACT or UGC-NET aspirants who need the introductory layer locked in before tackling case law, tool-specific procedure, and reconstruction. If you can pass this mock comfortably, you have the vocabulary for every advanced cyber-forensics topic that follows.

Themes covered:
- Volatile vs non-volatile memory and the order of volatility (RFC 3227)
- Write blockers and why they matter for evidence integrity
- Forensic imaging, hashing (MD5, SHA-256), and the EnCase E01 format
- Chain of custody — what it is, what breaks it
- First-responder priorities and the Faraday-bag rule for mobile devices
- File system fundamentals: NTFS, FAT, ext4, APFS — what each is used for
- Slack space, unallocated space, and what deleted-file recovery actually does
- The everyday artefacts: Windows Registry, event logs, browser cache, cookies, email headers
- Mobile basics: IMEI vs IMSI, logical vs physical acquisition

Each question has a detailed explanation citing the relevant RFC, NIST publication, vendor documentation or standard textbook (Carrier, Casey, Nelson). Allow 15 minutes when you take the timed version. The explanations are long enough to use as study notes by themselves; even if you skip the timed run, reading through them once is a complete refresher.
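The image-then-verify loop behind the imaging and hashing themes can be sketched in standard-library Python. The `acquire` helper below is hypothetical, not a real tool, and the "device" is just a file of random bytes standing in for a seized drive:

```python
import hashlib
import os
import tempfile

def acquire(src_path: str, img_path: str, chunk: int = 4096):
    """Copy src to img in fixed-size chunks (as dd would) and hash both sides."""
    src_hash, img_hash = hashlib.sha256(), hashlib.sha256()
    with open(src_path, "rb") as src, open(img_path, "wb") as img:
        while block := src.read(chunk):
            src_hash.update(block)
            img.write(block)
    # Re-read the finished image independently, so the verification digest
    # comes from what actually landed on disk, not from the copy loop.
    with open(img_path, "rb") as img:
        for block in iter(lambda: img.read(chunk), b""):
            img_hash.update(block)
    return src_hash.hexdigest(), img_hash.hexdigest()

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "device.bin")
with open(src, "wb") as f:
    f.write(os.urandom(10_000))          # stand-in for a seized drive

src_digest, img_digest = acquire(src, os.path.join(tmp, "device.raw"))
print(src_digest == img_digest)          # matching digests = bit-for-bit copy
```

In the field the source would sit behind a hardware write blocker; the matching-digest check is the part every introductory question returns to.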