Last Updated: March 12, 2015
Iron.io provides cloud-based infrastructure services for message queuing and task processing. The services are used by organizations throughout the world to scale out workloads and create distributed and more fault-tolerant applications. Iron.io routinely audits and manages the security of its services and applies security best practices so customers can focus on building and scaling their applications.
Iron.io applies security controls at every layer of the system architecture, including the physical layer, the service layer, and its interfaces. Iron.io also isolates customer message persistence and task environments and rapidly deploys security updates within its systems as necessary without customer interaction or service interruption.
In addition, the Iron.io platform is designed for stability and scaling, inherently mitigates common issues that lead to outages, and maintains recovery capabilities.
Iron.io’s physical infrastructure is hosted and managed within Amazon and Rackspace secure datacenters, and utilizes service components and technologies within these datacenters for security protections at multiple layers and against multiple threat vectors. Amazon and Rackspace continually manage risk and undergo recurring assessments to ensure compliance with industry standards.
Amazon’s and Rackspace’s data center operations have been accredited under multiple industry standards and certifications, including those described below.
Iron.io uses PCI-compliant cloud infrastructure providers AWS and Rackspace. These data center operations are PCI Service Provider Level 1 compliant. This is the most stringent level of certification available.
Iron.io uses PCI-compliant payment processor Stripe for encrypting and processing credit card payments. Stripe has been audited by a PCI-certified auditor and is certified to PCI Service Provider Level 1.
Iron.io utilizes ISO 27001 and FISMA certified data centers managed by Amazon and Rackspace. Amazon and Rackspace have many years of experience in designing, constructing, and operating large-scale data centers. Data centers are housed in nondescript facilities, and critical facilities have extensive setbacks and military-grade perimeter control berms as well as other natural boundary protection. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, state-of-the-art intrusion detection systems, and other electronic means. AWS authorized staff must pass two-factor authentication no fewer than three times to access data center floors. Rackspace makes use of keycard and biometric scanning protocols and employs strict access restrictions. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.
Amazon and Rackspace provide data center access and information only to employees who have a legitimate business need for such privileges. Every data center employee undergoes multiple, thorough background security checks before hire. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if he or she continues to be an employee. All physical and electronic access to data centers by employees is logged and audited routinely.
Advanced automatic fire detection and suppression equipment has been installed in the data centers to reduce risk. The fire detection system utilizes smoke detection sensors in all data center environments, mechanical and electrical infrastructure spaces, chiller rooms and generator equipment rooms. These areas are protected by either wet-pipe, double-interlocked pre-action, or gaseous sprinkler systems.
The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations, 24 hours a day, seven days a week. Uninterruptible Power Supply (UPS) units provide back-up power in the event of an electrical failure for critical and essential loads in the facility. Rackspace reports an N+1 redundant UPS power subsystem with instantaneous failover if the primary UPS fails. Data centers use on-site generators to provide backup power for the entire facility.
Climate control is required to maintain a constant operating temperature for servers and other hardware, which prevents overheating and reduces the possibility of service outages. Data centers are conditioned to maintain atmospheric conditions at optimal levels. Monitoring systems and data center personnel ensure temperature and humidity are at the appropriate levels. Rackspace reports an N+1 redundant HVAC system, ensuring a duplicate system comes online immediately should the primary HVAC system fail, and circulates filtered air every 90 seconds to remove dust and contaminants.
Data center staff monitor electrical, mechanical and life support systems and equipment so issues are immediately identified. Preventative maintenance is performed to maintain the continued operability of equipment.
Firewalls are utilized to restrict access to systems from external networks and between systems internally. By default all access is denied, and only explicitly required ports and protocols are allowed based on business need. Each system is assigned to a firewall security group based on the system’s function. Security groups restrict access to only the ports and protocols required for a system’s specific function to mitigate risk.
Host-based firewalls restrict customer tasks from establishing localhost connections over the loopback network interface to further isolate customer tasks. Host-based firewalls also provide the ability to further limit inbound and outbound connections as needed.
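As a concrete illustration, host-based rules of this kind might look like the following iptables-save sketch; the unprivileged task user name, ports, and addresses are assumptions for illustration, not Iron.io’s actual ruleset:

```text
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Block tasks (running as an unprivileged user) from reaching local
# services over the loopback interface
-A OUTPUT -o lo -m owner --uid-owner task-runner -j REJECT
# Allow return traffic, and management SSH from the internal network only
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT
COMMIT
```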
Our infrastructure provides DDoS mitigation techniques including TCP Syn cookies and connection rate limiting in addition to maintaining multiple backbone connections and internal bandwidth capacity that exceeds the Internet carrier supplied bandwidth. We work closely with our providers to quickly respond to events and enable advanced DDoS mitigation controls when needed.
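On Linux hosts, the SYN cookie defense mentioned above is typically enabled through kernel parameters; the values below are common hardening defaults rather than Iron.io’s production settings:

```text
# /etc/sysctl.d/99-hardening.conf (illustrative)
net.ipv4.tcp_syncookies = 1          # answer SYN floods with SYN cookies
net.ipv4.tcp_max_syn_backlog = 4096  # larger half-open connection queue
net.ipv4.tcp_synack_retries = 3      # give up on unanswered SYN-ACKs sooner
```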
Managed firewalls prevent IP, MAC, and ARP spoofing on the network and between virtual hosts to ensure spoofing is not possible. Packet sniffing is prevented by infrastructure including the hypervisor which will not deliver traffic to an interface which it is not addressed to. Iron.io utilizes application and task isolation, operating system restrictions, and encrypted connections to further ensure risk is mitigated at all levels.
Port scanning is prohibited and every reported instance is investigated by our infrastructure providers. When port scans are detected, they are stopped and access is blocked.
Iron.io uses SSL to secure data in transit and OAuth tokens for account authorization. During the SSL/TLS handshake, the client and Iron.io service negotiate encryption keys and certificates with each other before any application data is exchanged. This ensures encrypted data sent by the client can only be decrypted by the service, and vice versa. SSL certificates are updated on a regular basis or in the event of a security advisory from external security centers. OAuth provides secure and unique identity tokens that can be revoked and regenerated by the user without compromising user identity.
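As an illustration of this handshake-plus-token flow, a client request might be constructed as below; the regional endpoint, project ID, queue name, and environment-variable names are illustrative assumptions, so substitute the values from your own project’s credentials:

```python
import json
import os
import ssl
import urllib.request

# Illustrative placeholders; real values come from your Iron.io project.
TOKEN = os.environ.get("IRON_TOKEN", "example-token")
PROJECT_ID = os.environ.get("IRON_PROJECT_ID", "example-project")
url = (f"https://mq-aws-us-east-1.iron.io/1/projects/{PROJECT_ID}"
       "/queues/my_queue/messages")

body = json.dumps({"messages": [{"body": "hello"}]}).encode("utf-8")
request = urllib.request.Request(url, data=body, method="POST")
request.add_header("Content-Type", "application/json")
# Iron.io expects the OAuth token in the Authorization header.
request.add_header("Authorization", f"OAuth {TOKEN}")

# ssl.create_default_context() verifies the server certificate and
# hostname, so application data is only readable by the Iron.io endpoint.
context = ssl.create_default_context()
# urllib.request.urlopen(request, context=context) would send the message;
# it is omitted here so the sketch runs without network access.
```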
Message data and task payloads are replicated across two or more zones. Data storage components inherit the security measures described in this document including strict physical data center and system security and network isolation. Messages and payloads can be encrypted for additional security of data at rest. Messages within dedicated clusters within the message persistence layer are isolated and segmented from public clusters. Limited data retention policies can be employed to further reduce availability of data within the system.
Each task running within the IronWorker system runs within its own isolated environment and cannot interact with other tasks or areas of the system. This restrictive operating environment isolates security and stability issues at the task level. These self-contained Docker environments use LXC containers to isolate processes, memory, and the file system while host-based firewalls restrict applications from establishing local and inbound network connections.
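An isolated task launch of this kind can be approximated with a container invocation along these lines; the flags, limits, image name, and script name are illustrative assumptions, not Iron.io’s actual configuration:

```shell
# Illustrative task launch: each flag maps to an isolation property
# described above (resources, network, file system, privileges).
docker run --rm \
  --memory 512m \
  --net none \
  --read-only \
  --user nobody \
  task-image ./run_task.sh
```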
For additional technical information on IronWorker’s secure container environment see: http://blog.iron.io/2014/04/how-docker-helped-us-achieve-near.html
Decommissioning hardware is managed by our infrastructure providers using a process designed to prevent customer data exposure. AWS uses techniques outlined in DoD 5220.22-M (“National Industrial Security Program Operating Manual”) or NIST 800-88 (“Guidelines for Media Sanitization”) to destroy data. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.
For additional information see: https://aws.amazon.com/security
Iron.io maintains separate environments for the components of its systems, including the main website, HUD/dashboard, dev center, client libraries, and core services. The core services are further separated into specific components. Iron.io uses continuous integration practices and maintains test, staging, and production environments for all its systems.
System configuration and consistency is maintained through standard, up-to-date images, configuration management software, and by replacing systems with updated deployments. Systems are deployed using the most current images which are routinely updated with configuration changes and security updates and run through a suite of pre-production tests before deployment. Virtual instances no longer in service are reset by our infrastructure providers so that data is never unintentionally exposed. This process includes resetting every block of storage used by the system and scrubbing (setting to zero) memory.
System access is limited to Iron.io staff and requires SSH keys for identifying trusted computers along with usernames and passwords. Furthermore, operating systems running within AWS do not allow password authentication, preventing password brute-force attacks, theft, and sharing.
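A minimal sshd_config fragment implementing this policy might read as follows (a sketch of the practice described above, not Iron.io’s actual configuration):

```text
# /etc/ssh/sshd_config (sketch)
PasswordAuthentication no          # keys only; nothing to brute-force
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```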
Our vulnerability management process is designed to remediate risks without customer interaction or impact. Iron.io is notified of vulnerabilities through internal and external assessments, system patch monitoring, and third party mailing lists and services. Each vulnerability is reviewed to determine if it is applicable to Iron.io’s environment, ranked based on risk, and assigned to the appropriate team for resolution.
Iron.io deploys images to cloud servers and uses configuration management tools to create, scan, and validate these images. Run-time access to OS and system-level functions is restricted from outside access, and outside files and programs can run and be stored only within the LXC containers used by IronWorker. New systems are deployed with the latest updates, security fixes, and platform configurations and can be put into production as soon as they pass all functional, load, pre-production, and system tests. Existing systems can be decommissioned on the fly and replaced with new systems with no interruption of service. This process allows Iron.io to respond quickly and keep the service operating environments up-to-date.
Iron.io runs penetration tests against system API interfaces as part of the continual test suite that runs every four hours. These tests validate API access and ensure authorization works as specified. System tests using commercial-grade penetration testing software are run as part of the functional tests that run prior to deploying any new images for Iron.io components.
System components undergo penetration tests, vulnerability assessments, and source code reviews to assess the security of our application interface, architecture, and services layers. Iron.io engages with third-party security advisers and consultants to review the security of the Iron.io services and application layers and apply best practices.
All databases containing system and customer data are fully redundant within two or more zones as well as fully backed up on a daily basis to secure, access-controlled, and redundant storage facilities. System binaries and images and other service components are also individually backed up in the same manner and to the same degree. Each system component or image can be restored from backups as may be necessary.
In addition to, and in advance of, the standard backup practices referenced above, Iron.io’s system infrastructure scales and provides fault tolerance by automatically taking failed instances offline and replacing them with new instances. In the event of a node or zone failure, our data persistence layer will switch between fully current redundant databases without loss or disruption of messages or tasks.
Iron.io maintains redundancy within each component and each layer to prevent single points of failure. Iron.io utilizes multiple zones for all system components, persists and replicates data across zones, and offers services in multiple data centers including automatic DNS failover for added geographic resiliency. The IronMQ platform is deployed across multiple data centers all running the most current system images and, in the event of replicated system/data loss, can make use of system and data backups.
Our platform automatically switches to additional servers and datastores within the same zone or within other zones in the case of an outage. The Iron.io platform is designed to dynamically route requests within the system, monitor for failures, take failed components offline, and swap in new components without service disruption.
Customer messages that have been deleted and task payloads that have been processed are retained within the system for no greater than 30 days. After the retention period, this data is purged from the system and is not available or accessible to customers or Iron.io staff. Custom data retention policies can be put in place under certain plans so as to reduce the duration of retention.
Iron.io reviews system and service incidents immediately after they occur to understand the root cause and impact to customers and to improve the platform and processes. We keep an internal wiki with a log of all previous service incidents and their fixes so that we can be sure to solve the issue quickly should it arise again. Finding the root cause of these incidents is of utmost importance to Iron.io, and we always strive to release a fix as soon as possible.
For additional information see: http://www.iron.io/privacy
Iron.io cannot access any messages or task payloads that have been encrypted at the client level. Customer data is access controlled and all access by Iron.io staff is accompanied by customer approval and recorded for audit purposes.
As a condition of employment all Iron.io employees undergo pre-employment background checks and agree to company policies including security, privacy, and acceptable use policies.
Our security team is led by the Information Security Officer (ISO) and includes staff responsible for application and information security. The security team works closely with engineering, operations, support, and other teams to address risk and implement appropriate security, privacy, and disaster recovery measures.
Enable HTTPS for applications and SSL database connections to protect sensitive data transmitted to and from applications.
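For example, a PostgreSQL client can require a verified TLS connection directly in the connection URI; the user, host, and database names here are placeholders:

```text
postgres://app_user@db.example.internal:5432/app_db?sslmode=verify-full
```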
Customers with sensitive data can encrypt messages and task payloads to meet their data security requirements. Data encryption can be deployed within clients sending and receiving messages and task payloads using industry standard encryption and the best practices for your language or framework.
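A minimal sketch of this client-side pattern, using the third-party Python "cryptography" package’s Fernet recipe (AES with HMAC); the payload and key handling are illustrative, and in practice the key should live in your own secrets store, never alongside the queue credentials:

```python
from cryptography.fernet import Fernet

# Generate once and store securely; both producer and consumer need it.
key = Fernet.generate_key()
cipher = Fernet(key)

# Illustrative sensitive payload; only the ciphertext is enqueued, so
# Iron.io never sees the plaintext.
payload = b'{"order_id": 42, "card_last4": "1111"}'
encrypted = cipher.encrypt(payload)    # send this to the queue
decrypted = cipher.decrypt(encrypted)  # done by the consuming client
assert decrypted == payload
```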
Message queuing and task processing have limited durations, so where possible you should limit data retention within the Iron.io system to only the durations needed to accomplish the work.
To prevent unauthorized account access, use a strong passphrase for your Iron.io user account and make use of Iron.io’s role-based access control (RBAC) model to share projects by invitation rather than sharing user accounts. Iron.io provides numerous ways for users to protect their tokens and actively supports and encourages the use of config files or environment variables throughout our platform. Tokens and project IDs should be distributed to the team but not displayed publicly or in version control systems. Because the system is token based, it is easy to replace keys if they are lost or disclosed, and users are encouraged to rotate tokens at regular intervals.
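Following the environment-variable recommendation, a client might load its credentials as below; the IRON_TOKEN and IRON_PROJECT_ID names follow a common convention but are assumptions here:

```python
import os

# Load credentials from the environment rather than source code; rotating
# a token then means updating the environment, not the code base. The
# fallback values are illustrative defaults for this sketch only.
os.environ.setdefault("IRON_TOKEN", "example-token")
os.environ.setdefault("IRON_PROJECT_ID", "example-project")

token = os.environ["IRON_TOKEN"]
project_id = os.environ["IRON_PROJECT_ID"]

# Fail fast if a credential is missing instead of sending empty headers.
if not token or not project_id:
    raise RuntimeError("IRON_TOKEN and IRON_PROJECT_ID must be set")
```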
Elastic IP ranges are available within certain plans and can and should be used within tasks running on the IronWorker platform to secure and isolate interactions between IronWorker and client applications. We furthermore recommend customers encrypt data being transmitted from Iron.io services as well as authenticate incoming requests from workers running on the Iron.io platform.
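One common way to authenticate incoming worker requests is an HMAC signature over the request body with a shared secret; the header name, payload, and secret provisioning below are illustrative assumptions:

```python
import hashlib
import hmac
import secrets

# Provisioned to both the worker and the application out of band.
shared_secret = secrets.token_bytes(32)

def sign(body: bytes) -> str:
    """Worker side: sign the request body before sending it."""
    return hmac.new(shared_secret, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Application side: recompute and compare the signature.
    compare_digest avoids leaking information through timing."""
    return hmac.compare_digest(sign(body), signature)

body = b'{"task_id": "abc123", "status": "complete"}'
signature = sign(body)          # sent alongside the body, e.g. X-Signature
assert verify(body, signature)
assert not verify(b"tampered", signature)
```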
For more information on these topics, see http://dev.iron.io/worker/reference/security/
Logging can be critical for troubleshooting and investigating issues. Iron.io provides logging support within IronWorker including native logs as well as access gateways for real-time third party logging via syslog output from the IronWorker platform. Sensitive data should not be logged, however, as that introduces another data storage source with separate data retention durations.
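One way to keep sensitive values out of task logs is a redaction filter applied before records reach any handler; the logger name and secret list are illustrative:

```python
import logging

class RedactSecrets(logging.Filter):
    """Replace known secret strings in log messages with a placeholder."""

    def __init__(self, secrets):
        super().__init__()
        self.secrets = secrets

    def filter(self, record):
        message = record.getMessage()  # renders msg % args
        for secret in self.secrets:
            message = message.replace(secret, "[REDACTED]")
        record.msg, record.args = message, None
        return True  # keep the (now redacted) record

logger = logging.getLogger("worker")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.addFilter(RedactSecrets(["s3kr3t-token"]))

# Emits "calling API with token [REDACTED]"
logger.warning("calling API with token %s", "s3kr3t-token")
```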
For additional technical information on logging, see: http://dev.iron.io/worker/#inspect
Tasks running within IronWorker may choose to use third party services for added functionality such as Amazon’s S3, an email service provider such as SendGrid, Mandrill, or Mailgun, or other service providers. Be mindful of the access methods and data shared with these providers and their security practices as you would be with Iron.io. Where possible, use separate credentials within the IronWorker platform for accessing external systems.
For more information, see http://dev.iron.io/worker/reference/security/