Franklin implements technical and procedural safeguards to uphold the integrity, reliability, and completeness of data processing. Automated alerts are configured to identify and surface deviations, incomplete validations, and critical system changes that may impact the accuracy or validity of the analysis. These controls are embedded within operational and development workflows to ensure early detection of potential issues and reduce reliance on manual oversight.
This article compiles all alerts, notifications, and built-in technical controls and safeguards that users should be aware of, providing clear visibility into the system’s checkpoints. It also describes how the platform behaves during security incidents and connectivity loss, how data is backed up and recovered, and how personal and sensitive data is handled throughout its lifecycle.
Built-In Platform Safeguards
Franklin includes a series of automated safeguards that operate at key points in the user workflow. These safeguards are designed to prevent data entry errors, enforce input consistency, alert users to potential issues, and ensure that critical actions are confirmed before execution.
Login Activity Monitoring
Login activity is monitored and alerts are generated for events such as failed login attempts, unexpected access times, or access from unrecognized locations. These alerts help ensure that only authorized personnel interact with the system.
Note: For detailed information about authentication methods, password policies, and role-based access, refer to the Access Control and Authentication article.
Search Field Input Validation
Alerts are triggered when user input has an unsupported or incorrect format (e.g., missing genomic coordinates), preventing downstream errors by enforcing consistent input structure at the point of entry. Examples of validation alerts include:
Unsupported format: If the variant notation cannot be parsed by the system, a message is displayed: “Franklin couldn’t parse the provided variant, please check the format examples.”
Wrong reference: If the reference allele at the specified position does not match the provided input, the system displays: “Wrong ref. Ref at this location is [correct allele].”
Missing gene: If a transcript identifier is submitted without an associated gene, the alert “Search is missing a gene” is displayed.
These validations ensure that variant searches conform to the expected nomenclature and reference data before any downstream analysis is initiated.
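The parse-then-check sequence described above can be sketched as a small validation routine. This is an illustrative approximation, not Franklin's actual code: the regular expression, the `validate_variant` helper, and the `reference_lookup` callback are all assumptions, and only the alert wording mirrors the messages quoted above.

```python
import re

# Hypothetical point-of-entry validator; pattern and helper names are
# illustrative, the messages approximate the alerts described above.
VARIANT_RE = re.compile(r"^chr(\d+|[XYM]):(\d+)\s+([ACGT]+)>([ACGT]+)$")

def validate_variant(query: str, reference_lookup=None):
    """Return (ok, message).

    reference_lookup(chrom, pos) -> expected reference allele; when provided,
    the input's ref allele is checked against it (the "Wrong ref" case).
    """
    m = VARIANT_RE.match(query.strip())
    if not m:
        return False, ("Franklin couldn't parse the provided variant, "
                       "please check the format examples.")
    chrom, pos, ref, alt = m.group(1), int(m.group(2)), m.group(3), m.group(4)
    if reference_lookup is not None:
        expected = reference_lookup(chrom, pos)
        if expected != ref:
            return False, f"Wrong ref. Ref at this location is {expected}."
    return True, f"chr{chrom}:{pos} {ref}>{alt}"
```

The key point the sketch illustrates is ordering: format is checked before the reference allele, so a malformed query never reaches the reference comparison.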
User Validation of Transcript
When multiple transcript options are available for a given variant, users are required to actively select the clinically relevant transcript. A dialog box is presented with the message “Please Select the Variant You’re Referring to” and lists the available transcript-variant combinations (e.g., chr1:94564383 A>T, NM_000350). This safeguard ensures that variant classification and downstream interpretation are based on the most appropriate transcript for the clinical context.
Important: Selecting an incorrect transcript may lead to misclassification. Users should verify transcript selection against their organization’s clinical protocols.
Sample Sheet Upload and Validation
Users can upload a sample sheet in XLS, XLSX, or CSV format. If the upload fails (e.g., due to an internet outage), a notification will appear in the UI. In such cases, users should verify their internet connection and confirm that the file format is valid (XLS, XLSX, or CSV). If the issue persists, users should contact Franklin Support for further assistance.
Once the file is uploaded successfully, validation is performed on the sample sheet metadata—not the associated files. The validation outcomes are as follows:
No warnings or errors: The sample sheet is accepted, all cases are created, and a success message is shown in the UI.
Warnings only: The upload is completed and all cases are created. An email is sent to the uploading user detailing each warning, including the line, column, and data item involved (e.g., line 55 – ethnicity ‘test 123’ doesn’t exist). Users should review and correct the sample sheet accordingly.
Errors detected: The upload completes but no cases are created. All errors and warnings are shown in the UI. A detailed email is sent to the uploading user with the line, column, and data item causing the failure (e.g., line 55 – assay ‘TSO 501’ doesn’t exist). Users must update the sample sheet based on the error reasons.
Post-Validation Issues:
Once validation is completed, the system proceeds to create the samples and their linked cases. If an issue occurs during this stage, a generic error message may appear without specific details. Users should re-upload the sample sheet. If the issue persists, they should contact Franklin Support.
Cases that are successfully created will appear in the system with the status "Pending Samples".
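The warning-versus-error split above can be sketched as follows. This is a hedged illustration of the documented behavior, not platform code: the column names, the known-value vocabularies, and the return shape are assumptions; what it preserves is the rule that warnings alone still create cases (in "Pending Samples" status) while any error blocks case creation entirely.

```python
# Hypothetical sample sheet metadata validation; field names and
# vocabularies are illustrative assumptions.
KNOWN_ASSAYS = {"TSO 500", "WES v2"}
KNOWN_ETHNICITIES = {"Ashkenazi Jewish", "European", "Unknown"}

def validate_sample_sheet(rows):
    """rows: list of dicts with 'sample', 'assay', 'ethnicity'.

    Returns (errors, warnings, created_cases).
    """
    errors, warnings = [], []
    for i, row in enumerate(rows, start=2):  # line 1 is the header row
        if row.get("assay") not in KNOWN_ASSAYS:
            errors.append(f"line {i} - assay '{row.get('assay')}' doesn't exist")
        if row.get("ethnicity") not in KNOWN_ETHNICITIES:
            warnings.append(f"line {i} - ethnicity '{row.get('ethnicity')}' doesn't exist")
    # Any error blocks case creation; warnings alone do not.
    if errors:
        cases = []
    else:
        cases = [{"sample": r.get("sample"), "status": "Pending Samples"} for r in rows]
    return errors, warnings, cases
```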
Sample-to-File Linking
The sample-to-file linking process acts as a safeguard by ensuring that only complete and correctly named data files are associated with each sample before any downstream analysis begins.
During the sample sheet upload, users must specify, for each entry, the sample name and the location of the sample files: either an S3 path or a BaseSpace project and its samples.
For S3, any files in the provided folder that are prefixed with the sample name are automatically linked to the corresponding sample.
For BaseSpace, any files found in the biosample or the AppSession are linked to the sample.
If the exact set of required files is known when defining a workflow, the expected file composition should be configured in the assay definition. Linking will not proceed until all required files are detected in the specified S3 path.
Once the sample sheet is successfully uploaded, the system begins checking for file availability immediately. Subsequent checks are performed every 5 minutes, ensuring timely linking once all requirements are fulfilled.
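The S3 linking rule above (prefix match, gated by the assay's expected file composition, re-checked every 5 minutes) can be sketched like this. The function name, the suffix-based representation of the required file set, and the `None`-means-wait convention are illustrative assumptions, not Franklin's implementation.

```python
# Hypothetical sketch of the S3 prefix-matching link step; the
# required-suffix config format is an illustrative assumption.
def link_sample_files(sample_name, folder_files, required_suffixes=None):
    """Return the files to link, or None if required files are not all present.

    folder_files: file names found in the configured S3 path.
    required_suffixes: expected file composition from the assay definition,
    e.g. ("_R1.fastq.gz", "_R2.fastq.gz"); None means link whatever matches.
    Returning None models "wait for the next 5-minute availability check".
    """
    matched = [f for f in folder_files if f.startswith(sample_name)]
    if required_suffixes is not None:
        missing = [s for s in required_suffixes
                   if sample_name + s not in matched]
        if missing:
            return None  # not all required files detected yet
    return matched
```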
Missing Sample Warning
Cases with missing or unlinked samples display the status “Pending Samples” in the case list. This alert prevents users from proceeding with incomplete data and ensures corrective action is taken before analysis or reporting begins, supporting traceability and diagnostic integrity.
Confirmation Prompts
Franklin includes confirmation prompts (“Are you sure?” pop-ups) in scenarios where user actions may result in irreversible changes or data loss. These prompts are triggered before critical operations such as deleting records, submitting finalized analyses, or navigating away from unsaved content. The purpose is to ensure that users explicitly acknowledge the consequences of their actions, particularly when changes cannot be undone or when unsaved edits will be discarded. This mechanism helps prevent accidental data modification and reinforces informed decision-making.
Environment Mismatch Alert on Login
To prevent data access issues and ensure proper environment segregation, an alert is shown when users log into the community environment instead of their assigned organizational environment. The dialog prompts: “Are You in the Right Franklin Environment?” and provides a direct link to the correct environment (e.g., enterprise.genoox.com) with a “Redirect Me” button. This safeguard prevents users from inadvertently working in the wrong environment, which could affect data visibility, access controls, and audit trail integrity.
Session Timeout
For organizations using SSO, users are notified when their session expires and prompted to sign in again through their identity provider. The dialog displays: “You’ve Been Logged Out – It seems like you’ve been inactive for a while. To protect your data, we’ve ended your session.” Users can click “Log Me Back In” to re-authenticate. This ensures that only authenticated users maintain access and reduces the risk of unauthorized access from unattended sessions.
Caution: For accounts not using SSO, there is no default system timeout. Users should manually log out after completing their work or when stepping away from their workstation for an extended period.
Quality Control Monitoring
Franklin quality control (QC) metrics are displayed in the Workbench screen as a separate tab. These metrics include coverage metrics, sequencing quality, sex concordance, and relatedness checks. QC thresholds are assay-specific and configurable, allowing users to define hard or soft warnings based on predefined criteria, so that each assay is monitored against its own thresholds and potential quality issues are flagged before interpretation begins.
QC metrics include, among others: Average Variant Depth, Hom/Het Ratio, Ti/Tv Ratio, Variant Quality, Number of SNPs, Duplication Check, and Sex Detection. When QC warnings are detected, they are displayed prominently in the Quality Control tab for user review.
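Assay-specific thresholds with hard/soft severity, as described above, can be sketched as a small rule table plus an evaluator. The metric keys echo the list above, but the threshold values, rule format, and function name are illustrative assumptions, not Franklin's actual configuration.

```python
# Hypothetical assay-specific QC rules; thresholds are illustrative only.
QC_THRESHOLDS = {
    "average_variant_depth": {"min": 30, "severity": "hard"},
    "ti_tv_ratio":           {"min": 2.0, "max": 3.3, "severity": "soft"},
    "hom_het_ratio":         {"min": 0.3, "max": 2.5, "severity": "soft"},
}

def evaluate_qc(metrics, thresholds=QC_THRESHOLDS):
    """Return a list of (metric, severity) flags for out-of-range values."""
    flags = []
    for name, rule in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported for this assay
        too_low = "min" in rule and value < rule["min"]
        too_high = "max" in rule and value > rule["max"]
        if too_low or too_high:
            flags.append((name, rule["severity"]))
    return flags
```

A "hard" flag would block progression while a "soft" flag would only warn; the evaluator itself treats both uniformly and leaves that policy to the caller.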
Mandatory Fields
Franklin enforces mandatory fields across key workflows to ensure completeness and standardization of critical data. These required fields must be filled before users can proceed. For example, when creating a new case, the Case Name field is mandatory and displays the validation message “Case Name is required” if left blank. This prevents submission of incomplete records and ensures that all necessary metadata is captured at the point of entry.
Case Processing Notifications
A notification is displayed in the Case Workbench when a case is still processing. This alert indicates that the analysis is not yet complete and additional time is needed, helping users avoid acting on incomplete or partial results. The message “Processing – We are evaluating confidence and prioritization for each variant” is shown until the analysis pipeline completes.
Important: Do not proceed with interpretation or reporting until processing is complete. Acting on partial results may lead to incorrect clinical conclusions.
Behavior During Security Incidents and Connectivity Loss
Franklin is designed to maintain data integrity and user safety during both suspected security incidents and connectivity disruptions. The following describes how the platform responds in these scenarios.
Security Incident Detection and Response
QIAGEN performs routine analysis of audit logs and system-level inspections to detect suspicious behavior, potential attacks, and security breaches. Continuous monitoring tracks system activity and flags suspicious events for timely intervention. When a security incident is suspected, the following measures apply:
Audit log analysis: All data access events are logged and retained for six years. Logs are analyzed to identify anomalous access patterns, unauthorized access attempts, and potential breaches.
Incident response plan: QIAGEN maintains a documented incident response plan with trained response teams, postmortem procedures, and stakeholder notification protocols.
Breach notification: In the event of a confirmed data breach, QIAGEN will notify affected customers and relevant regulatory authorities in accordance with applicable laws, including the EU GDPR, the Australian Privacy Amendment (Notifiable Data Breaches) Act 2017, HIPAA, and other regional requirements.
Root cause analysis: Following any incident, a root cause analysis is performed and corrective actions are implemented to prevent recurrence. Periodic reviews (at least annually) of actual incidents are conducted to identify necessary program updates.
Loss of Connectivity
As a cloud-based SaaS platform, Franklin requires an active internet connection for operation. If connectivity is lost during a session:
Any in-progress data uploads will be interrupted. Users will see a notification in the UI indicating the failure. Once connectivity is restored, users should re-initiate the upload.
Analysis pipelines running on the server side will continue to execute regardless of the user’s connection status. Results will be available upon reconnection.
Unsaved changes in the user interface (e.g., interpretation edits made within a popup dialog) may be lost if connectivity drops before the user clicks “Save.” Users are advised to save work frequently.
Session tokens may expire during extended outages. Users will be required to re-authenticate upon reconnection.
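A client-side retry pattern for interrupted uploads, consistent with the guidance above, might look like the following. This is a generic sketch, not a Franklin feature: the `upload_with_retry` wrapper, attempt count, and exponential backoff are assumptions about how a caller could harden its own upload step.

```python
import time

# Illustrative client-side retry for a dropped upload; names and backoff
# policy are assumptions, not platform behavior.
def upload_with_retry(upload, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call upload(); on ConnectionError, back off exponentially and retry.

    Raises the last ConnectionError if all attempts fail, so the caller can
    surface the UI notification and re-initiate later.
    """
    for attempt in range(1, attempts + 1):
        try:
            return upload()
        except ConnectionError:
            if attempt == attempts:
                raise
            sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
```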
Note: For guidance on network requirements, firewall configuration, and recommended connectivity, refer to the Network and Connectivity Guidance article.
Secure Backup and Recovery
QIAGEN implements a comprehensive backup and disaster recovery strategy to protect all data stored and processed within the Franklin platform. The approach ensures high availability, data durability, and the ability to restore services within defined time frames.
Backup Architecture and Frequency
All user data is redundantly stored on multiple devices across multiple facilities, designed to sustain the concurrent loss of data in two facilities. This architecture provides 99.999999999% (eleven nines) durability and 99.99% availability of stored objects over a given year. Backup details include:
Backup frequency: Full daily backups are performed automatically for both customer data and computed data (data derived from customer uploads, including analysis results).
Backup storage: Backups are stored in secure, restricted areas within cloud services across different regions (per client approval) and availability zones, using Amazon AWS infrastructure that is ISO 27001 certified and compliant with HIPAA and GDPR standards.
Backup encryption: Backup data is encrypted using industry-standard encryption mechanisms before being archived, ensuring that sensitive information is protected during storage and transit.
Backup retention: Data is retained according to a documented retention policy and securely destroyed following industry standards and compliance requirements when no longer needed.
Recovery Time and Point Objectives
In the event of a service disruption or disaster, QIAGEN’s disaster recovery plan defines the following objectives:
| Parameter | Target | Details |
|---|---|---|
| Recovery Time Objective (RTO) | Immediate to 24 hours | Depends on failure type; ranges from immediate recovery for minor disruptions to full service restoration within 24 hours for major disasters. |
| Recovery Point Objective (RPO) – Clinical Data | Up to 24 hours | Reports and clinical information (family pedigree, sample information, phenotypes) may have up to 24 hours of data loss in worst-case scenarios. |
| Recovery Point Objective (RPO) – Sequencing Data | Up to 1 hour | Actual sequencing data (FASTQ, BAM, VCF files) has a tighter recovery point of 1 hour due to real-time replication. |
These objectives are achieved through continuous data backup and real-time replication techniques that ensure minimal data loss and swift restoration of services.
Disaster Recovery Testing
QIAGEN maintains documented procedures to regularly test disaster recovery plans, including:
Backup integrity testing to verify that stored backups can be successfully restored.
Failover procedures to validate that secondary systems can assume operations when primary systems are unavailable.
Recovery drills conducted at least annually, with results reviewed and reported to ensure minimal downtime and data integrity.
Production Verification Testing (PVT) conducted before any new release is deployed to the production environment.
Backing Up Configuration and Critical Data
As a cloud-hosted SaaS platform, Franklin’s configuration and critical data backup processes are managed by QIAGEN on behalf of all users. Users do not need to perform manual backups. The following describes how configuration and data are protected:
Data classification: Data is classified by sensitivity level to determine backup frequency and protection measures. Customer data and computed results receive full daily backups, while sequencing data benefits from real-time replication.
Configuration management: All platform configurations, including assay definitions, workflow settings, custom panels, and organizational preferences, are included in the automated backup schedule.
Offsite backup security: Offsite data backups maintain the same level of physical and logical security as in-use production data. Backups are stored in ISO 27001-certified facilities using encrypted storage across multiple geographic regions.
Data retrieval: Data can be retrieved from encrypted backups with strict controls to verify authorized access. Backup data is delivered in a compatible format for the recipient’s systems.
Important: While QIAGEN manages all server-side backups, users are responsible for retaining local copies of any files or reports they download from the platform. Downloaded files are not automatically re-generated from backups.
Note: For organizations with specific data residency requirements, Franklin supports hosting in multiple regions including Ireland (EU), United States, Canada, Australia, and Israel. Backup data is stored within the same regulatory jurisdiction as production data unless otherwise agreed.
Restoring to a Known Secure State After an Incident
In the event of a security incident, system failure, or data integrity concern, QIAGEN follows a structured process to restore the Franklin platform to a known secure state:
Incident containment: The affected components are isolated to prevent further impact. Access may be temporarily restricted while the investigation proceeds.
Impact assessment: The scope of the incident is evaluated, including which data, users, and systems were potentially affected.
System restoration: Production systems are restored from verified, encrypted backups. Real-time replication techniques are used where available to minimize data loss.
Configuration verification: All platform configurations, security policies, access controls, and workflow settings are verified against the known-good baseline.
Integrity validation: Data integrity checks are performed to confirm that restored data matches expected states. Audit trails are reviewed to identify any unauthorized modifications.
Service resumption: Once restoration and validation are complete, services are brought back online. Users may be required to re-authenticate.
Post-incident review: A root cause analysis is conducted, corrective actions are documented, and the incident response plan is updated as needed.
All changes to the production environment are managed through a documented change management procedure covering change request evaluation, impact assessment, approval, implementation, testing, documentation, and communication.
Note: Franklin’s infrastructure updates and software updates are managed with backup and disaster recovery plans that ensure no impact on normal system operation during scheduled maintenance windows.
Handling of Personal and Sensitive Data
Franklin processes sensitive data including patient information, diagnostic test results, clinical data, genomic sequencing data, and health records. QIAGEN implements comprehensive technical and procedural measures to ensure this data is protected throughout its entire lifecycle, from collection through storage, transmission, and processing to disposal.
Data Protection in Transit
All data transmitted over the internet is encrypted using industry-standard TLS (Transport Layer Security), version 1.2 or higher. This applies to all user interactions with the platform, API communications, and data upload/download operations. Data can remain in the cloud without being downloaded, reducing exposure to endpoint risks.
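For clients integrating with the platform's APIs, the TLS 1.2+ floor can be enforced locally as well. This is a minimal sketch using Python's standard `ssl` module; the helper name `make_tls_context` is illustrative, and nothing here is specific to Franklin.

```python
import ssl

# Minimal client-side TLS context enforcing the TLS 1.2+ floor described
# above; usable with any HTTPS client that accepts an SSLContext.
def make_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies server certificates
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    return ctx
```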
Data Protection at Rest
All user data is stored and processed in high-security data centers with backup power, environmental controls, and strict physical access controls. Data at rest is encrypted using AES-256 encryption. Storage infrastructure meets compliance requirements for ISO 27001, HIPAA, CLIA, GCP, 21 CFR Part 11, and applicable Data Privacy Safe Harbor regulations.
Data Segregation and Access Control
Franklin employs a multi-tenancy architecture with data segregation implemented at the application code level based on account identifiers. This ensures secure, isolated data management for each client organization. Access permissions are set on a per-user-per-sample basis, meaning a user’s access to a given file depends on specific conditions defined by their role and organizational policies.
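Application-level segregation keyed on an account identifier, combined with per-user-per-sample permissions, can be sketched as a two-stage filter. The record shape, function name, and permission representation here are illustrative assumptions; only the two-layer structure (tenant isolation first, then per-sample access) reflects the description above.

```python
# Hypothetical application-level tenant segregation; record shape and
# permission model are illustrative assumptions.
def fetch_samples(records, account_id, user_permissions):
    """Return only records in the caller's account that the user may access.

    user_permissions: set of sample ids this user is allowed to see
    (per-user-per-sample permissions).
    """
    return [r for r in records
            if r["account_id"] == account_id          # tenant isolation
            and r["sample_id"] in user_permissions]   # per-sample access
```

Note that a permitted sample id in another tenant's account is still excluded: the account filter is applied unconditionally, never bypassed by a permission grant.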
Note: For detailed information about access controls, roles, and permissions, refer to the Access Control and Authentication article.
Data Loss Prevention
QIAGEN has established measures to prevent unauthorized data transfers, protecting sensitive information from external threats. These measures include network-level controls (VPCs, firewalls, WAF), application-level access restrictions, and monitoring for anomalous data access patterns.
Data Retention and Disposal
Data is retained in accordance with documented retention policies that specify how long data is kept and the procedures for secure destruction. Audit logs are maintained for six years to ensure regulatory compliance. Upon termination of a contract, security arrangements for user data are maintained until deletion is completed, ensuring protection throughout the disposal process.
Privacy Governance
QIAGEN maintains a dedicated privacy program with the following elements:
Data Privacy Officer: A dedicated Privacy Team led by a Data Privacy Officer provides specialized expertise and oversight on all privacy matters.
Privacy training: Annual privacy and cybersecurity training is completed by all employees.
Privacy impact assessments: Privacy due diligence is conducted for new features, integrations, and third-party engagements.
Lawful processing: Personal information is processed only for lawful purposes related to health outcomes, research, and clinical support, in accordance with applicable regulations.
Regulatory Compliance
Franklin’s data handling practices are aligned with the following regulatory frameworks:
| Regulation / Standard | Scope |
|---|---|
| EU General Data Protection Regulation (GDPR) | Data protection and privacy for individuals within the EU/EEA. |
| HIPAA Privacy Rule | Protection of individually identifiable health information in the United States. |
| ISO 27001 | Information security management system certification for data centers and infrastructure. |
| Australian Privacy Act 1988 | Data protection and privacy for individuals in Australia, including the Notifiable Data Breaches scheme. |
Note: For information about encryption standards, infrastructure security, and network controls, refer to the Minimum Secure Operating Environment and Network and Connectivity Guidance articles.
