SPF

Myths and Legends of SPF

SPF stands for Sender Policy Framework; the current standard is titled "Sender Policy Framework (SPF) for Authorizing Use of Domains in Email" (RFC 7208). Email domains use this protocol to specify which Internet hosts are authorized to use the domain in the SMTP HELO and MAIL FROM commands. Publishing an SPF policy requires no additional software, so the procedure is extremely simple: just add a TXT record containing the policy to the DNS zone. An example of such a record is given at the end of this article. There are numerous manuals and even online builders for working with SPF.

The first version of the SPF standard was approved more than 10 years ago. Since then, numerous implementations and deployment practices have appeared, and a new version of the standard has been released. Most surprisingly, over these 10 years SPF, more than any other standard, has accumulated an incredible number of myths and misconceptions that wander from article to article and pop up with enviable regularity in discussions and in answers to questions on forums. Yet the protocol itself seems very simple: implementation takes only a couple of minutes. Let's try to recall and analyze the most common misconceptions.

1. Misconception: SPF will protect my domain from spoofing

Fact: SPF does not protect the sender’s address that is visible to the user.

Explanation: SPF does not work with the contents of the message that the user sees, in particular, the sender’s address. SPF authorizes and verifies addresses at the mail transport level (SMTP) between two MTAs (envelope-from, RFC5321.MailFrom aka Return-Path). These addresses are not visible to the user, and they can differ from those in the From header that the user sees (RFC5322.From). Thus, nothing prevents a message with a fake sender in the ‘From’ header from being authorized with SPF.
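
To make the distinction concrete, here is a minimal Python sketch (the relay host and all addresses are hypothetical) in which the envelope sender that SPF checks has nothing to do with the From header the user sees:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@trusted.example"      # RFC5322.From: the address the user sees
msg["To"] = "victim@example.net"
msg["Subject"] = "Hello"
msg.set_content("SPF never looks at the From header above.")

with smtplib.SMTP("smtp.some-relay.example") as conn:
    # RFC5321.MailFrom (envelope-from): the address SPF actually validates
    conn.send_message(msg, from_addr="bounce@other-domain.example")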

Use DMARC to protect the visible domain name from spoofing.

2. Misconception: After implementation, SPF will improve security and combat spam

Fact: most likely, you will not see any significant changes in terms of security and spam.

Explanation: SPF is inherently an altruistic protocol: it provides no direct advantage to whoever publishes the SPF policy. Theoretically, your implementing SPF could protect someone else from receiving fake emails sent from your domain. In practice even that is not true, because the results of applying SPF are rarely used directly (more on this later). Moreover, even if all domains published SPF and all recipients refused messages without SPF authorization, this would hardly reduce the amount of spam.

SPF does not protect against spoofing or spam directly; nevertheless, the protocol is actively and successfully used in spam-filtering systems and in protection against counterfeit emails, because it allows each message to be checked against a specific domain and that domain's reputation.

3. Misconception: SPF negatively (positively) influences email deliverability

Fact: It all depends on the type of message, the way it is delivered, and your reputation.

Explanation: SPF is not meant to affect email deliverability within a standard message flow, but it does adversely affect improper implementations and indirect message flows, where the recipient's server receives a message from a server other than the one that originally sent it (redirected emails, for example). On the other hand, spam-filtering systems and reputation-based classifiers take into account the presence of SPF and the reputation of the authorizing domain, which generally gives a positive result for the standard message flow. Unless, of course, you yourself are a spammer.

4. Misconception: SPF provides authorization of the email sender

Fact: SPF provides authorization of the email server that sends a message on behalf of a domain

Explanation: Firstly, SPF works only at the domain level, not at the level of individual email addresses. Secondly, even if you are a legitimate email user of a specific domain, SPF does not allow you to send messages from anywhere you wish: for your message to pass SPF validation, you must send it from an authorized server. Thirdly, if you authorize a server via SPF (for example, by allowing any ESP or hosting provider to send mail from your domain) and that server imposes no additional restrictions of its own, then all users of that server are authorized to send messages on behalf of your domain. Please keep this in mind when implementing SPF and when providing email authentication in general.

5. Misconception: Email messages not authorized by SPF will be rejected

Fact: In general, SPF authorization or lack thereof does not have a significant impact on the delivery of email messages.

Explanation: SPF is only an authorization standard, and it explicitly states that the actions applied to messages that fail authorization are outside the scope of the standard and are governed by the recipient's local policy. An outright ban on such messages causes problems for messages that travel indirect delivery routes, for example through redirection or mailing lists, and a local policy should take this into account. In practice, a strict ban on SPF authorization failure is not recommended. The standard allows (but does not require) a strict ban only when the domain publishes the -all (hardfail) policy and no other filters apply. In most cases, SPF authorization is used as one factor in a weighted scoring system, and that factor carries little weight, because an SPF failure is not a reliable indicator of spam: many spam messages pass SPF authorization, while legitimate messages often fail it, and cardinal changes in this field are unlikely. Seen this way, there is no difference between -all and ~all.

SPF authorization is not that important for message delivery or spam filtering, but it confirms the sender's address and its relationship with the domain, and it allows the domain's reputation to be used for the message instead of the IP reputation.

DMARC policy has a much more significant influence on the decision-making on further actions in relation to handling a message that has not passed authorization. DMARC allows you to reject (or quarantine) all or part of messages that have not been authorized.

6. Misconception: SPF recommends using -all (hardfail), since it is safer than ?all or ~all

Fact: -all does not improve security in any way, but it does negatively affect message delivery.

Explanation: -all causes messages sent through indirect routes to be blocked by the few recipients who use SPF directly and block on failure, while having no significant impact on most spam and fake messages. At the moment, ~all (softfail) is considered the most appropriate policy, and it is used by almost all large domains, even those with very strict security requirements (such as paypal.com). -all can be used for domains that never send legitimate email. DMARC treats -, ~ and ? as equivalent.

7. Misconception: It is sufficient to configure SPF only for domains that are used to send mail

Fact: It is also necessary to configure SPF for the domain names used in HELO on your mail servers. In addition, it is recommended to publish a blocking policy for MX and A records, and for a wildcard, where those names are not used to send email.

Explanation: In some cases, in particular when delivering an NDR (non-delivery report), DSN (delivery status notification) or certain auto-responses, the sender address in the SMTP envelope (envelope-from) is empty. In this case, SPF checks the host name from the HELO/EHLO command. You should check which name your server uses in this command (for example, by looking at the server configuration or by sending an email to a public server and examining the headers) and publish an SPF policy for that name.
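
For example, if your server announces itself as mail.example.com (a hypothetical name) in HELO/EHLO, a record like the following authorizes that host's own A record and nothing else:

mail.example.com. IN TXT "v=spf1 a -all"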

Spammers are not limited to the domains you actually use to send messages: they can send spam on behalf of any host that has an A or MX record. Therefore, if you publish SPF out of altruistic considerations, you need to add SPF for all such records, and it is also desirable to add a wildcard (*) record for nonexistent names.

8. Misconception: It’s better to add a special SPF type record to DNS (instead of TXT)

Fact: It must be a TXT record.

Explanation: According to the current version of the SPF standard (RFC 7208), SPF type DNS records are deprecated and should no longer be used.

9. Misconception: it is recommended that you include as many of the available elements in SPF as possible (a, mx, ptr, include), because this can reduce the likelihood of an error

Fact: it is necessary to minimize the SPF record, and it is recommended to list only network addresses via the ip4/ip6 mechanisms.

Explanation: There is a limit of 10 DNS queries for resolving an SPF policy; exceeding this limit results in a permanent policy error (permerror). Moreover, DNS is an unreliable service, so every request carries some probability of a failure (temperror), which grows with the number of requests. Each additional a or include mechanism requires an additional DNS request; include also requires resolving every element specified in the included policy. mx requires an MX lookup plus an additional A record request for each MX server. ptr requires an additional request and is, moreover, inherently unsafe. Only network addresses listed via ip4/ip6 require no additional DNS requests.
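
To illustrate what counts toward the limit, here is a hedged Python sketch that tallies the DNS-querying mechanisms in a single SPF string (real validators must also recurse into every include target, each of which adds its own lookups):

def count_spf_lookups(spf: str) -> int:
    """Count the mechanisms in one SPF string that cost a DNS query."""
    count = 0
    for term in spf.split()[1:]:        # skip the v=spf1 version tag
        term = term.lstrip("+-~?")      # strip the qualifier, if any
        if term.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif term == "a" or term.startswith(("a:", "a/")):
            count += 1
        elif term == "mx" or term.startswith(("mx:", "mx/")):
            count += 1
        elif term == "ptr" or term.startswith("ptr:"):
            count += 1
    return count

# ip4/ip6 terms are free; include and mx each cost a lookup:
print(count_spf_lookups("v=spf1 ip4:1.2.3.0/24 include:_spf.example.com mx ~all"))  # 2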

10. Misconception: TTL for the SPF record should be smaller (larger)

Fact: As for most DNS records, it is best to choose a TTL in the range of 1 hour to 1 day: reduce it in advance of deployment or planned changes, and increase it once the policy is stable.

Explanation: A higher TTL reduces the likelihood of DNS errors and, as a consequence, SPF temperrors, but it increases the response time when it is necessary to make changes to the SPF record.

11. Misconception: If I don’t know which IP addresses can be used to send my messages, then it is better to publish the policy with +all

Fact: A policy with an explicit +all or an implicit rule that enables mailing on behalf of the domain name from any IP address will negatively affect the delivery of emails.

Explanation: Such a policy does not make sense, and it is often used by spammers to ensure SPF authentication of spam messages that are sent through botnets. Therefore, a domain that publishes such a policy risks being blocked.

12. Misconception: It does not make sense to use SPF

Fact: It is necessary to use SPF.

Explanation: SPF is one of the mechanisms for authorizing the sender in email and a way to identify the domain in reputation-based systems. Large email service providers are gradually beginning to require message authorization, and messages without it can be subject to "penalties" in terms of delivery or display to the user. In addition, auto-responses and delivery or non-delivery notifications may not be sent for messages that fail SPF authorization, because such responses are sent exactly to the SMTP envelope address that SPF authorizes, and many receivers require it to be authorized. Therefore, SPF is needed even if all messages are authorized with DKIM. SPF is also a must for IPv6 networks and cloud services: in such networks it is almost impossible to rely on the reputation of IP addresses, and messages from addresses without SPF authorization will, as a rule, not be accepted. In accordance with the standard, one of the primary tasks of SPF is to allow the reputation of a domain name to be used instead of the IP reputation.

13. Misconception: SPF is self-sufficient

Fact: DKIM and DMARC are also necessary.

Explanation: DKIM is required for messages to survive forwarding. DMARC is required to protect the sender's address from spoofing. In addition, DMARC allows you to receive reports on violations of the SPF policy.

14. Misconception: Two SPF records are better than one

Fact: The record must be exactly one.

Explanation: This requirement is described in the standard: more than one record results in a permanent error (permerror). If you need to merge several SPF records, simply publish a single record with several include mechanisms, as in the example below.
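
For example, two policies can be merged into one record like this (hypothetical include targets):

@ IN TXT "v=spf1 include:_spf.first.example.com include:_spf.second.example.com ~all"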

15. Misconception: spf1 is good, but spf2.0 is better

Fact: You should use v=spf1.

Explanation: spf2.0 does not exist and never has; it is not a standard. By publishing an spf2.0 record, you expose yourself to the risk of unpredictable results. spf2.0 was merely mentioned in the experimental RFC 4406 (Sender ID), which was written on the assumption that such a standard would be adopted, since the corresponding discussions did take place. Sender ID, which was supposed to solve the problem of address spoofing, never became a generally accepted standard; use DMARC instead. Even if you decide to use Sender ID and publish an spf2.0 record, it is not a replacement for the spf1 record.

I had almost finished writing this article when I was intercepted by our customer support staff, who strongly (and categorically) recommended that I mention the following nuances of SPF that they often have to deal with when resolving various issues:

  1. SPF policy should end with the all or redirect directive. There should be absolutely nothing after these directives.
  2. all or redirect directives can be used in a policy exactly once, and they replace each other (that is, one policy cannot include both all and redirect simultaneously).
  3. The include directive does not replace all or redirect. include can be used several times, but the policy must still be terminated with an all or redirect directive. include and redirect must point to a valid policy that is itself terminated with all or redirect. Note that include ignores the qualifier used on all (-all, ~all, ?all) in the included policy, whereas for redirect the qualifier makes a difference.
  4. include is used with colons (include:example.com), and redirect requires the sign of equality (redirect=example.com).
  5. SPF does not cover subdomains. SPF DOES NOT COVER SUBDOMAINS. SPF. DOES. NOT. COVER. SUBDOMAINS. (DMARC, by default, does cover them.) You should publish SPF for each A or MX record in DNS if that record is used, or can be used, to deliver email; see the zone sketch after this list.
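
A hedged zone-file sketch (hypothetical names and addresses) covering the apex domain, a sending subdomain, and all remaining names:

@    IN TXT "v=spf1 ip4:1.2.3.0/24 ~all"
mail IN TXT "v=spf1 a -all"
*    IN TXT "v=spf1 -all"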

Summary

  • You should be sure that you create a policy for all MX and, preferably, all A records.
  • Add SPF policies for the names used in the HELO/EHLO of the mail server.
  • Publish the SPF policy as a TXT record with v=spf1 in DNS.
  • Try to list addresses as ip4/ip6 networks in your policy, and put them at the beginning of the policy to avoid unnecessary DNS requests. Minimize the use of include, try to do without a, use mx only in case of extreme necessity, and never use ptr.
  • Specify ~all for domains that really send email, and -all for unused domains and records.
  • Use small TTL during the implementation and testing period, and then increase TTL to the appropriate values. Before making any changes, reduce the TTL beforehand.
  • Be sure to validate your SPF policy with an online validator.
  • Do not limit yourself to deploying SPF: try to implement DKIM, and always implement DMARC. DMARC protects your messages from spoofing and allows you to receive information on violations of the SPF policy, so you will be able to detect forgery, indirect message flows and configuration errors.
  • If you read Russian (or just for fun), after implementing SPF, DKIM and/or DMARC, check them using https://postmaster.mail.ru/security/. SPF and DMARC are validated against their current state; DKIM is checked against statistics for the previous day, and only if there was correspondence with Mail.Ru mailboxes on the previous day or earlier.
  • There are good SPF BCPs (best common practices) from M3AAWG.

Sample SPF policy: @ IN TXT "v=spf1 ip4:1.2.3.0/24 include:_spf.myesp.example.com ~all"

Create your SPF record

SPF authenticates a sender’s identity by comparing the sending mail server’s IP address to the list of authorized sending IP addresses published by the sender in the DNS record. Here’s how to create your SPF record:

  • Start with v=spf1 (version 1) tag and follow it with the IP addresses that are authorized to send mail. For example, v=spf1 ip4:1.2.3.4 ip4:2.3.4.5
  • If you use a third party to send email on behalf of the domain in question, you must add an “include” statement in your SPF record (e.g., include:thirdparty.com) to designate that third party as a legitimate sender
  • Once you have added all authorized IP addresses and include statements, end your record with a ~all or -all tag
  • A ~all tag indicates a soft SPF fail, while a -all tag indicates a hard SPF fail. In the eyes of the major mailbox providers, ~all and -all will both result in SPF failure. Return Path recommends -all as the most secure option.
  • A single SPF TXT string cannot be over 255 characters in length, and a record cannot trigger more than ten DNS lookups (mechanisms such as include, a, mx, ptr and exists). Here's an example of what your record might look like:
  • v=spf1 ip4:1.2.3.4 ip4:2.3.4.5 include:thirdparty.com -all  
  • For your domains that do not send email, the SPF record should contain no mechanism except -all. Here's an example record for a non-sending domain:
  • v=spf1 -all

Verify published SPF
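
One way to verify the published record is to query it with the third-party dnspython package (a sketch; example.com is a placeholder, and dnspython 1.x uses query() instead of resolve()):

import dns.resolver  # pip install dnspython

answers = dns.resolver.resolve("example.com", "TXT")
for rdata in answers:
    txt = b"".join(rdata.strings).decode()
    if txt.startswith("v=spf1"):
        print(txt)  # the published SPF policy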

AWS PostgreSQL RDS

Intro

The World’s Most Advanced Open Source Relational Database

PostgreSQL has won this title for the second year in a row. First released in 1989, PostgreSQL turns 30 this year and is at the peak of its popularity, showing no signs of ageing, with a very active community. It has been the fastest-growing DBMS of the last three years.

Everyone wants a fast database; I think that is what everybody can agree upon. The question is: fast in what respect?
In terms of databases, there are at least two different dimensions of fast:

  • number of transactions per second
  • throughput or amount of data processed

These are interrelated but definitely not the same.
And both have completely different requirements in terms of IO. In general, you want to avoid IO at all costs, because IO is always slow compared to access to data in memory, CPU caches of different levels, or even CPU registers. As a rule of thumb, every layer slows down access by about 1:1000.
For a system that demands a large number of transactions per second, you need as many concurrent IOs as you can get; for a system with high throughput, you need an IO subsystem that can deliver as many bytes per second as possible.

That leads to the requirement to have as much data as possible near to the CPU, e.g. in RAM. At least the working set should fit, which is the set of data that is needed to give answers in an acceptable amount of time.

Each database engine has a specific memory layout and handles different memory areas for different purposes.

To recap: we need to avoid IO and we need to size the memory layout so that the database is able to work efficiently (and I assume that all other tasks in terms of proper schema design are done).

Here are the critical parameters:

  • max_connections
    This is what you think: the maximum number of concurrent connections. If you reach the limit, you will not be able to connect to the server anymore. Every connection occupies resources, so the number should not be set too high. If you have long-running sessions, you probably need a higher number than if the sessions are mostly short-lived. Keep it aligned with your connection-pooling configuration.
  • max_prepared_transactions
    When you use prepared transactions, you should set this parameter at least equal to max_connections, so that every connection can have at least one prepared transaction. Consult the documentation for your preferred ORM to see if there are special hints on this.
  • shared_buffers
    This is the main memory configuration parameter; PostgreSQL uses it for shared memory buffers. As a rule of thumb, set this parameter to 25% of the physical RAM. The operating system caches data to and from disk as well, so increasing this value beyond a certain amount will give you no benefit.
  • effective_cache_size
    The available OS memory should equal shared_buffers + effective_cache_size. So when you have 16 GB RAM and you set shared_buffers to 25% thereof (4 GB) then effective_cache_size should be set to 12 GB.
  • maintenance_work_mem
    This memory is used for maintenance tasks like VACUUM or CREATE INDEX. A good first estimate is 25% of shared_buffers.
  • wal_buffers
    This translates roughly to the amount of uncommitted or dirty data inside the caches. If you set this to -1, PostgreSQL takes 1/32 of shared_buffers. In other words, with shared_buffers = 32 GB you might have up to 1 GB of data not yet written to the WAL (transaction) log.
  • work_mem
    All complex sorts benefit from this, so it should not be too low. Setting it too high can have a negative impact, though: a query with 4 tables in a merge join can occupy 4 × work_mem. You could start with about 1% of shared_buffers, or at least 8 MB. For a data warehouse, I'd suggest starting with much larger values.
  • max_worker_processes
    Set this to the number of CPUs you want to dedicate to PostgreSQL. This is the number of background worker processes the database engine can use.
  • max_parallel_workers_per_gather
    The maximum workers a Gather or GatherMerge node can use (see documentation about details), should be set equal to max_worker_processes.
  • max_parallel_workers
    Maximum parallel worker processes for parallel queries. Same as for max_worker_processes.

The following settings have direct impact on the query optimizer, which tries its best to find the right strategy to answer a query as fast as possible.

  • effective_io_concurrency
    The number of real concurrent IO operations supported by the IO subsystem. As a starting point: with plain HDDs try 2, with SSDs go for 200, and if you have a potent SAN you can start with 300.
  • random_page_cost
    This factor basically tells the PostgreSQL query planner how much more (or less) expensive it is to access a random page than to do sequential access.
    In times of SSDs or potent SANs this does not seem as relevant as it was with traditional hard disk drives. For SSDs and SANs, start with 1.1; for plain old disks, set it to 4.
  • min_ and max_wal_size
    These settings set size boundaries on the transaction log of PostgreSQL. Basically this is the amount of data that can be written until a checkpoint is issued which in turn syncs the in-memory data with the on-disk data.
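
On RDS you cannot edit postgresql.conf directly; parameters are applied through a DB parameter group instead. A hedged boto3 sketch (the parameter group name is hypothetical, and the values mirror the examples below):

import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="my-postgres-params",  # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "shared_buffers",
            # shared_buffers is expressed in 8 kB pages: bytes / 4 / 8192 = 25% of RAM
            "ParameterValue": "{DBInstanceClassMemory/32768}",
            "ApplyMethod": "pending-reboot",  # static parameter: needs a reboot
        },
        {
            "ParameterName": "work_mem",
            "ParameterValue": "65536",        # in kB, i.e. 64 MB
            "ApplyMethod": "immediate",       # dynamic parameter: applies right away
        },
    ],
)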

Use cases

8 GB RAM, 4 virtual CPU cores, SSD storage, Data Warehouse:

  • large sequential IOs, due to ETL processes
  • large result sets
  • complex joins with many tables
  • many long lasting connections

So let’s look at some example configuration:

max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
maintenance_work_mem = 1GB
wal_buffers = 16MB
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 64MB
min_wal_size = 4GB
max_wal_size = 8GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4

2 GB RAM, 2 virtual CPU cores, SAN-like storage, a blog engine like WordPress

  • few connections
  • simple queries
  • tiny result sets
  • low transaction frequency
max_connections = 20
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
wal_buffers = 16MB
random_page_cost = 1.1
effective_io_concurrency = 300
work_mem = 26214kB
min_wal_size = 16MB
max_wal_size = 64MB
max_worker_processes = 2
max_parallel_workers_per_gather = 1
max_parallel_workers = 2

Raspberry Pi, 4 ARM cores, SD-card storage, some self-written Python thingy

max_connections = 10
shared_buffers = 128MB
effective_cache_size = 384MB
maintenance_work_mem = 32MB
wal_buffers = 3932kB
random_page_cost = 10 # really slow IO, really slow
effective_io_concurrency = 1
work_mem = 3276kB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4

Vulnerabilities

Intro

There are two types of vulnerabilities:

  • SSL Certificate Vulnerabilities
  • SSL Endpoint Vulnerabilities

SSL Certificate Vulnerabilities

Certificate Name Mismatch

If the exact domain name (FQDN) in the SSL certificate does not match the domain name displayed in the browser's address bar, the browser displays a name mismatch warning.

Internal names

After November 1, 2015, CAs will no longer issue certificates to internal names. Internal names cannot be implicitly validated because they cannot be externally verified.

Missing or Misconfigured Fields and Values in Certificates

Certificates that do not contain the necessary fields and values may cause browsers to display warnings.

SHA-1 Hashing Algorithm

Certificates signed with the SHA-1 hashing algorithm are considered weak and are no longer trusted by major browsers; use certificates signed with SHA-256 or stronger.

Weak Hashing Algorithm

Algorithms once thought of as secure and unbreakable have become either weak or breakable, for example MD5 and RC4.

Weak Keys

Exhaustive key searches/brute-force attacks are a danger to any secure network, and as computational power increases, so does the need for stronger keys. The current acceptable strength for an RSA (Rivest-Shamir-Adleman) key is 2048 bits, so certificates must be generated with an RSA key of 2048 bits or higher.

SSL Endpoint Vulnerabilities

How to do SSL protocol check

For a general check of which SSL/TLS protocols a website supports, use https://www.ssllabs.com/ssltest/analyze.html or https://paranoidsecurity.nl/
The easiest way to amend (enable/disable) the protocols in use on Windows/IIS is a tool called "IIS Crypto" from https://www.nartac.com/Products/IISCrypto/
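
If you prefer to check from a script rather than a web service, a minimal Python sketch (the hostname is a placeholder) prints the protocol version and cipher suite your client negotiates with a server:

import socket
import ssl

host = "example.com"  # placeholder
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
        print(tls.cipher())   # (cipher name, protocol, secret bits)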

The IIS Crypto tool mentioned above supports the following features:

  • Single click to secure your site using best practices
  • Stop FREAK, BEAST and POODLE attacks
  • Easily disable SSL 2.0 and SSL 3.0
  • Enable TLS 1.1 and 1.2
  • Disable other weak protocols and ciphers
  • Enable forward secrecy
  • Reorder cipher suites
  • Templates for compliance with government and industry regulations – FIPS 140-2 and PCI

POODLE and TLS_FALLBACK_SCSV

We’ve been seeing a lot of requests to implement TLS_FALLBACK_SCSV. Unfortunately, it only works if you already have clients that understand it. This article will give some background, discuss TLS downgrades and finally have some suggestions for what you can do now.

Background
Over the years, cryptographers have worked with Internet engineers to improve the Transport Layer Security (TLS) protocol. Each revision of the protocol provides improvements to defend against the latest attacks devised by the cryptographers.
In 2011, Thai Duong and Juliano Rizzo demonstrated a proof-of-concept for attacks against the way SSL/TLS use the Cipher Block Chaining (CBC) mode of encryption. Their paper introduced the Browser Exploit Against SSL/TLS (BEAST) and used it to demonstrate how a “man-in-the-middle” (MITM) with the ability to run Javascript on the victim client could generate many thousands of TLS sessions and eventually recover a session cookie.
The defense against the BEAST attack was included in TLS 1.1, but few web servers or clients migrated right away.

POODLE is a similar attack to BEAST, but works against SSLv3 due to the special structure of the padding in SSLv3. Part of the POODLE attack is a downgrade to SSLv3.

TLS Downgrades
TLS agents should negotiate the highest version of the protocol supported by client and server. Clients advertise the highest version of the protocol they support. The server selects the highest version it supports, and sends the negotiated version number in the ServerHello message.

Many broken TLS implementations in widespread use were unable to cope with versions they did not understand. This caused large numbers of TLS sessions to break during the TLS 1.1 rollout.

The browser vendors implemented a downgrade mechanism. Immediately after a session fails during the initial handshake, the browser will retry, but attempt a max version one lower than before. After attempting to connect to a server with the max version set to TLS1.1, the client would retry with the max version set to TLS1.0.

Security researchers love automatic downgrades because they get to attack older protocols instead of newer, more secure protocols. A MITM attacker can break client connections during the initial TLS handshake, triggering a downgrade all the way to SSLv3.

Bodo Möller and Adam Langley devised a Signaling Cipher Suite Value (SCSV), TLS_FALLBACK_SCSV, so the client can inform the server of a downgraded connection. It indicates to the server that the client had previously attempted to connect using a higher max protocol version, but the session was broken off before the initial handshake could be completed.

If the server sees the SCSV, and if it could have negotiated a protocol version higher than what the client is currently announcing as its maximum, the server must break the connection.

On October 21, Möller and Langley presented to members of the IETF TLS Working Group to lay out their rationale and argue for inclusion of the TLS_FALLBACK_SCSV draft in the upcoming revision to TLS.

The key points of their argument are presented on page 3:

  • Ideally, stop doing downgrade retries
  • If that's not practical, clients should add TLS_FALLBACK_SCSV to ClientHello.cipher_suites in fallback retries
  • Servers that detect TLS_FALLBACK_SCSV will reject the connection if ClientHello.client_version is a downgrade

Möller and Langley implemented TLS_FALLBACK_SCSV in Chrome, Firefox, and Google servers earlier in 2014. During the past several months, they’ve had an opportunity to confirm that it allows SSLv3 connections only in cases where that is truly the highest common protocol version.

On October 15, OpenSSL for the first time integrated TLS_FALLBACK_SCSV code.

Upgrade your clients
The best protection against POODLE is to disable SSLv3. That is not always possible because of legacy clients. Unfortunately, TLS_FALLBACK_SCSV requires both clients and servers to implement it. Legacy clients will not send TLS_FALLBACK_SCSV. You must update the clients to newer code to get all the advantages of TLS_FALLBACK_SCSV.
If you can update all your clients to code that supports TLS1.x, then you can successfully disable SSLv3 from your BIG-IPs using either method described earlier.
If you can’t upgrade your clients, you can avoid POODLE by using SSL3’s RC4 stream cipher instead of a block cipher with CBC. Be aware that there are known weaknesses in RC4.

RC4 usage

Protocols:

  • SSLv2: Released in 1995. Most modern clients do not support SSLv2, but the DROWN attack demonstrated that merely serving SSLv2 enables the inspection of traffic encrypted with more modern TLS versions.
  • SSLv3: Released in 1996. Considered to be insecure after the POODLE attack was published in 2014. Turning off SSLv3 effectively removes support for Internet Explorer 6.

SSLv3 POODLE mitigation recommendations

In our previous post, we discussed POODLE and legacy SSLv3 clients.
The best solution to POODLE is to disable SSLv3.
However, SSLv3 often can’t be disabled because legacy clients only speak SSLv3.
F5's security teams have done some investigation, and we believe that RC4 can be used as a POODLE mitigation for those legacy clients. RC4 is a stream cipher and is not vulnerable to the POODLE attack.
RC4 does have a known weakness. After hundreds of millions of messages, an attacker could recover the plaintext.
POODLE can recover information after only tens of thousands of attacks.
So even though RC4 is not recommended as a cipher, it remains more secure to use in SSLv3 sessions than AES-CBC.
If you cannot disable SSLv3, you may enable RC4-SHA only for use in SSLv3 sessions until you are able to replace all the legacy clients.
To configure your virtual server to only allow SSLv3 RC4-SHA, use a cipher string like the following:
"default:-RC4:-SSLv3:SSLv3+RC4-SHA"
"default" sets the default ciphers, "-RC4" removes any ciphers that contain RC4 (this is optional), "-SSLv3" removes any SSLv3 ciphers, and "SSLv3+RC4-SHA" re-enables only the RC4-SHA cipher from SSLv3. Any client connecting via SSLv3 will be forced to use RC4 rather than a CBC cipher that is vulnerable to POODLE.
See SOL 13171 for information on setting your cipher string.
There are known attacks against RC4 that are better than brute-force. But given POODLE, RC4 is the most secure SSLv3 cipher.
It is still recommended to disable SSLv3 and RC4 once you are able to remove all legacy clients.

Who still needs RC4

At the time we had an internal debate about turning off RC4 altogether, but statistics showed that we couldn't. Although only a tiny percentage of web browsers hitting CloudFlare's network needed RC4, that's still thousands of people contacting web sites that use our service.
To understand who needs RC4, I delved into our live logging system and extracted the User-Agent and country the visitor was coming from. In total, roughly 0.000002% of requests to CloudFlare use the RC4 protocol. It's a small number, but it's significant enough that we believe we need to continue to support it for our customers.
Requests to CloudFlare sites that are still using RC4 fall into four main categories: people passing through proxies, older phones (often candy bar), other (more on that below) and a bucket of miscellaneous (like software checking for updates, old email programs and ancient operating systems).

Ciphers

Examining data for a 59-hour period last week showed that 34.4% of RC4-based requests used RC4-SHA and 63.6% used ECDHE-RSA-RC4-SHA. RC4-SHA is the older of the two; ECDHE-RSA-RC4-SHA uses a newer elliptic curve based method of establishing an SSL connection. Either way, they both use the RC4 encryption algorithm to secure data sent across the SSL connection. We'd like to stop supporting RC4 altogether, because it is no longer believed to be secure, but we continue to offer it for the small number of clients who can't connect more securely.

If you ever need to know the details of an SSL cipher you can use the openssl ciphers command:

$ openssl ciphers -tls1 -v RC4-SHA
RC4-SHA SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=SHA1

which shows that RC4-SHA uses RSA for key exchange, RSA for authentication, RC4 (128-bit) for encryption and SHA1 for message authentication.

Similarly,

$ openssl ciphers -tls1 -v ECDHE-RSA-RC4-SHA
ECDHE-RSA-RC4-SHA SSLv3 Kx=ECDH Au=RSA Enc=RC4(128) Mac=SHA1

shows that the same encryption, authentication and message authentication are used as in RC4-SHA, but the key exchange is made using Elliptic Curve Diffie-Hellman.

Inside RC4

One of the reasons RC4 is used for encryption is its speed. RC4 is a very fast encryption algorithm and it can be easily implemented on a wide variety of hardware (including phones with slow processors and even on 8-bit systems like the Arduino). After all, RC4 dates back to 1987.

The core of RC4 is the following algorithm:

i := 0
j := 0 
while GeneratingOutput:
    i := (i + 1) mod 256
    j := (j + S[i]) mod 256
    swap values of S[i] and S[j]
    K := S[(S[i] + S[j]) mod 256]
    output K
endwhile

It generates a pseudo-random stream of numbers (8-bit values in the range 0 to 255) by doing simple table lookups, swapping values, and adding modulo 256 (which is very, very fast). The output of the algorithm is usually combined with the message to be encrypted byte by byte using some fast scheme like XOR.
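
For illustration only (RC4 should never protect real traffic), here is the same algorithm in runnable Python, including the key-scheduling step that the loop above assumes has already been done:

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): initialize and shuffle the state table S.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (the loop shown above), XORed with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"  # applying RC4 twice decrypts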

The following short video shows the RC4 algorithm in action. It’s been restricted to 32 values rather than 256 to fit it nicely on the screen but the algorithm is identical. The red stream of numbers at the bottom shows the pseudo-random stream output by the algorithm. (The code for this animation is available here; thanks to @AntoineGrondin for making the animated GIF).

So, RC4 is fast, but who’s still using it? To answer that I looked at the HTTP User-Agent reported by the device connecting to CloudFlare. There were 292 unique User-Agents.

Who still uses RC4?

Firstly, lots of people are using older "candy bar" style phones: phones like the Nokia 6120 classic, released in 2007 (and the phone with the greatest number of RC4 requests to CloudFlare sites: 4% of the RC4-based requests in the measurement period), the Lemon T109, or the Sony Ericsson K310, released in 2006.

And, of course, it's not all older telephones being used to visit CloudFlare-powered web sites. There are old browsers too. For example, we've seen the now-ancient iCab 2.9.9 web browser (released in 2006) running on a 68k-based Macintosh (last sold by Apple in 1996).

Another source of RC4-only connections is older versions of Adobe AIR. AIR is often used for games and if users don’t update the AIR runtime they can end up using the older RC4 cipher.

Yet another source is stand-alone software that makes its own SSL connection. We’ve seen some software checking update servers using RC4-secured connections. The software makes a connection to its own update server using HTTPS but the available ciphers are limited and RC4 is chosen. The command-line program curl was used to generate 1.9% of RC4-based requests to CloudFlare sites (all done with versions dating to 2009).

There's also quite a bit of older Microsoft Internet Explorer around, including Internet Explorer 5.01 (which dates back to 1999!).

Looking at Windows tells a similar story of older versions of the operating system (except for the presence of Windows 7, which is explained below), with lots of Windows XP out there.

I sampled connections using RC4 to see which countries they came from. In Brazil, India, and across central Africa RC4 is still being used quite widely. A lot of RC4 is also in use in the US. That seems like a surprise, and there's an extra surprise.

Transparent SSL Proxies

Digging into the User-Agent data for the US we see the following web browser being used to access CloudFlare-powered sites using RC4:

Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/34.0.1847.137 Safari/537.36

That's the most recent version of Google Chrome running on Windows 7 (which explains the presence of Windows 7 mentioned above). It should not be using RC4. In fact, most of the connections from Windows machines that we see using RC4 should not be using it (since we prioritize 3DES over RC4 for older machines).

It was initially unclear why this was happening until we looked at where the connections were coming from. They were concentrated in the US and Brazil and most seemed to be coming from IP addresses used by schools, hospitals and other large institutions.

Although the desktop machines in these locations have recent Windows and up to date browsers (which will not use RC4) the networks they are on are using SSL-based VPNs or firewalls that are performing man-in-the-middle monitoring of SSL connections.

This enables them to filter out undesirable sites, even those that are accessed using HTTPS, but it appears that the VPN/firewall software is using older cipher suites. That software likely needs updating to stop it using RC4 for secure connections.

What you can do

You can check the strength of your browser’s SSL configuration by visiting How’s My SSL. If you get a rating of “Probably Okay” then you’re good. If not make sure you have the latest browser version.

BEAST – (Browser Exploit Against SSL/TLS)

Cipher suites that use CBC (block-based) ciphers with older protocol versions, TLS 1.0 and SSL 2.0/3.0, are vulnerable to the BEAST attack. To fix it you need to:
– Enable TLS 1.1 and/or TLS 1.2 on servers that support TLS 1.1 and 1.2.
– Enable TLS 1.1 and/or TLS 1.2 in Web browsers that support TLS 1.1 and 1.2.

BREACH (Browser Reconnaissance & Exfiltration via Adaptive Compression of Hypertext)

BREACH attacks are similar to the CRIME attack. Known as CVE-2013-3587, Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) is an instance of CRIME against HTTP compression: CRIME attacked TLS/SPDY compression, whereas BREACH targets HTTP gzip/DEFLATE. Turning off TLS compression therefore has no effect on BREACH, as it exploits the underlying HTTP compression. The attack follows the basic steps of the CRIME attack, and there are several ways to remediate the issue, such as disabling HTTP compression, protecting the application from CSRF attacks, randomising CSRF tokens per request to prevent them being captured, or obfuscating the length of page responses by adding random amounts of arbitrary bytes to the response.
Web Server: Turn off compression for pages that include PII (Personally Identifiable Information).
Web Browser: Force browser not to invite HTTP compression use.
Web Applications:
– Consider moving to Cipher AES128.
– Remove compression support on dynamic content.
– Reduce secrets in response bodies.
– Use rate-limiting requests.

CRIME (Compression Ratio Info-leak Made Easy)

The Transport Layer Security (TLS) protocol contains a feature that allows you to compress the data passed between the server and the Web browser. TLS data compression is susceptible to the CRIME exploit. CRIME targets cookies over connections that use HTTPS protocol (Hyper Text Transfer Protocol with Secure Sockets Layer) and SPDY protocol (Web protocol from Google), which employ TLS data compression.
In a CRIME attack, the attacker recovers the content of secret authentication cookies and uses this information to hijack an authenticated web session. The attacker uses a combination of plaintext injection and TLS compression data leakage to exploit the vulnerability: the attacker lures the web browser into making several connections to the website, then compares the size of the ciphertexts sent by the browser during each exchange to determine parts of the encrypted communication and hijack the session.
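
The size side channel at the heart of CRIME is easy to demonstrate: when attacker-controlled text is compressed together with a secret, a correct guess compresses better than a wrong one. A toy Python sketch of the principle (not an attack on real TLS; the cookie value is made up):

import zlib

SECRET = b"sessionid=7f3a"  # hypothetical cookie the attacker wants to recover

def observed_length(injected: bytes) -> int:
    # The attacker sees only the length of the compressed-then-encrypted data;
    # stream ciphers and (roughly) CBC preserve the plaintext length.
    return len(zlib.compress(SECRET + b"; " + injected))

print(observed_length(b"sessionid=7"))  # matching prefix: compresses better...
print(observed_length(b"sessionid=x"))  # ...than a wrong guess, usually a byte longer
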
If you are vulnerable to the CRIME attack, the highest grade you can receive is a C. To resolve it:
– Disable server (website) TLS data compression and Web browser TLS data compression.
– Modify gzip to allow for explicit separation of compression contexts in SPDY.

FREAK (Factoring Attack on RSA-EXPORT Keys)

A team of researchers revealed that the old export-grade cryptographic suites are still being used today. Servers that support RSA export cipher suites could allow a man-in-the-middle (MITM) to trick clients, who support the weak cipher suites, into using these weak 40- and/or 56-bit export cipher suites to downgrade their connection. The MITM can then use today’s computing power to crack those keys in just a few hours.

To be vulnerable to this attack:
Server: The server must support RSA export cipher suites.
AND
Client: The client must do one of the following: (1) must offer an RSA export suite, or (2) must be using Apple SecureTransport, or (3) must be using a vulnerable version of OpenSSL, or (4) must be using Secure Channel (Schannel).

To fix it:
Server-Side: Disable support for all export-grade cipher suites on your servers. We also recommend that you disable support for all known insecure ciphers (not just RSA export ciphers), disable support for ciphers with 40- and 56-bit encryption, and enable forward secrecy.
Client-Side: Vulnerable clients include software that relies on OpenSSL or Apple's Secure Transport (i.e., Chrome, Safari, Opera, the Android and BlackBerry stock browsers) or Windows Secure Channel/Schannel (i.e., Internet Explorer). The FREAK vulnerability is patched in the latest OpenSSL release.

Heartbleed Bug

The cryptographic libraries in OpenSSL versions 1.0.1 through 1.0.1f and 1.0.2-beta1 are vulnerable to the Heartbleed Bug attack.
The solution is to patch your software: upgrade to the latest version of OpenSSL (version 1.0.1g or later).

Insecure TLS Renegotiation

The Secure Socket Layer (SSL)/Transport Layer Security (TLS) protocol contains a session renegotiation feature that permits a server and client to use their connection to establish new parameters and generate new keys during the session.
To fix it: Make sure your servers support secure renegotiation by running current versions of the SSL/TLS protocol. Servers should not be running SSL 2.0, an outdated version of SSL that has known vulnerabilities.

Logjam Attack

Researchers found that old DHE export-grade cryptographic suites are still being used. They also discovered that servers with support for these DHE_EXPORT cipher suites enabled could allow a man-in-the-middle (MITM) to trick clients that support the weak DHE_EXPORT cipher suites into downgrading their connection to a 512-bit key exchange.

To fix it:
– Disable support for all DHE_EXPORT cipher suites on your servers.
– Generate a strong Diffie-Hellman parameter set, minimum 2048 bits (see the sketch below).
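
Most servers do this with the openssl dhparam command; the sketch below performs the same generation with the third-party Python cryptography package (older versions of the package also require a backend argument):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import dh

# Generating a fresh 2048-bit DH parameter set can take a while.
parameters = dh.generate_parameters(generator=2, key_size=2048)
pem = parameters.parameter_bytes(
    serialization.Encoding.PEM,
    serialization.ParameterFormat.PKCS3,
)
with open("dhparams.pem", "wb") as f:
    f.write(pem)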

BlueKeep (then BlueKeep II, III, IV, V aka DejaBlue)

BlueKeep (CVE-2019-0708) is a security vulnerability discovered in Microsoft's Remote Desktop Protocol that allows for the possibility of remote code execution. It was identified in May 2019.
A few weeks later, further flaws in RDP were found (CVE-2019-1181 and CVE-2019-1182).
The RDP protocol uses "virtual channels", configured pre-authentication, as a data path between the client and server for providing extensions. RDP 5.1 defines 32 "static" virtual channels, and "dynamic" virtual channels are contained within one of these static channels. If a server binds the virtual channel "MS_T120" (a channel for which there is no legitimate reason for a client to connect) to a static channel other than 31, heap corruption occurs that allows arbitrary code execution at the system level.

How to Prevent and Fix BlueKeep

  • First and foremost action is to patch all of your Windows machines (user workstations, servers etc) with the patches released by Microsoft. Even non-supported versions (XP, Vista, 2003) have patches available here.
  • Block RDP port 3389 if it is not needed (using a network firewall or even the Windows firewall). If port 3389 is accessible from the Internet, this is a huge mistake: either block it immediately or patch the system. Currently there are around 1 million unpatched Windows machines on the Internet with an exposed RDP port.
  • Enable Network Level Authentication (NLA) for RDP connections. NLA requires authentication therefore a possible worm will not be able to propagate to machines having NLA.
  • Similar to point 2 above, disabling Remote Desktop service (if it’s not required) will help to mitigate the issue.

To fix it: KEEP CALM AND START PATCHING. DO NOT RUSH, but plan for it!
Permit me to remind you that BlueKeep itself hasn't yet been reliably exploited in the wild. The threat is real, but it's not viral or immediate.
Microsoft released patches for the vulnerability on 14 May 2019, covering versions up to Windows Server 2012.
First, focus on patching externally facing RDP servers, then move on to critical servers such as domain controllers and management servers. Finally patch non-critical servers that have RDP enabled, along with the rest of the desktop estate. You can find more information on applying the patch from Microsoft’s support pages

RC4 Cipher Enabled

The BEAST attack was discovered in 2011. The mitigation is to enable TLS 1.1 and TLS 1.2 on servers and in browsers. Because RC4 is easy to implement, and because of the BEAST workaround, use of the RC4 stream cipher became widespread; one estimate from 2014 put RC4 at up to 50% of all TLS traffic.
To fix it:
– Enable TLS 1.2 on servers that support TLS 1.2 and switch to AEAD cipher suites (i.e. AES-GCM).
– Enable TLS 1.2 in browsers that support TLS 1.2.

SSL 2.0 Protocol Enabled

"SSL 2.0 is an outdated protocol version with known vulnerabilities."
The SSL 2.0 protocol was deprecated in 1996 due to known security flaws, yet some servers still use it. Servers still running the SSL 2.0 protocol should disable it.

SSL 3.0 Protocol Enabled

While the SSL 3.0 protocol is enabled, a man-in-the-middle (MITM) attacker can intercept encrypted connections and recover the plaintext (the POODLE attack).
The most effective way to counter POODLE is to disable the SSL 3.0 protocol.
Disable SSL 3.0 on the server and enable TLS 1.1 and TLS 1.2.

TLS 1.0 and TLS 1.1 Protocol Enabled

It's time to disable TLS 1.0 (2019 onwards). Unless you need to support legacy browsers, you should disable TLS 1.1 as well.
Microsoft recommends customers get ahead of this issue by removing TLS 1.0 dependencies in their environments and disabling TLS 1.0 at the operating system level where possible (More details at https://docs.microsoft.com/en-us/security/solving-tls1-problem)
If you disable TLS 1.0 and TLS 1.1, the following user agents and their older versions will likely be affected (specific user agent versions on different operating systems may vary).

  • Android 4.3
  • Chrome 29
  • Firefox 26
  • Internet Explorer 10
  • Java 6u45, 7u25
  • OpenSSL 0.9.8y
  • Safari 6.0

SSL/TLS Best Practices (2019+)

Use Secure Protocols – TLS 1.1/1.2/1.3
Use at least 2048-Bit Private Keys
Protect Private Keys
Ensure Sufficient Hostname Coverage
Obtain Certificates from a Reliable CA
Use Strong Certificate Signature Algorithms – SHA256
Use Complete Certificate Chains
Use Secure Cipher Suites:

  • Anonymous Diffie-Hellman (ADH) suites do not provide authentication.
  • Cipher suites with "NULL" encryption offer only an integrity check. They do not encrypt data and are not secure for most usages.
  • Export cipher suites are insecure when negotiated in a connection, but they can also be used against a server that prefers stronger suites (the FREAK attack).
  • Suites with weak ciphers (typically of 40 and 56 bits) use encryption that can easily be broken.
  • RC4 is insecure.
  • 3DES is slow and weak.

Select Best Cipher Suites
Use Forward Secrecy
Use Strong Key Exchange
Mitigate Known Problems
Use Session Resumption – a performance-optimization technique
Avoid Too Much Security
Use WAN Optimization and HTTP/2
Cache Public Content
Use OCSP Stapling
Use Fast Cryptographic Primitives
HTTP and Application Security
Encrypt Everything
Eliminate Mixed Content
Understand and Acknowledge Third-Party Trust
Use Secure Cookies – Secure cookie response header
Secure HTTP Compression
Deploy HTTP Strict Transport Security – HSTS response header
Deploy Content Security Policy – CSP header
Do Not Cache Sensitive Content
Consider Other Threats
Public Key Pinning (advanced)
DNSSEC and DANE (advanced)

Security Protocol Support by OS Version

Browsers

(In the tables below, "Partial" generally means the protocol is supported but disabled by default.)

Browsers | TLS 1.0 | TLS 1.1 | TLS 1.2 | TLS 1.3
Mobile IE version 10 and below | Yes | No | No | No
Desktop IE versions 7 and below | Yes | No | No | No
Desktop IE versions 8, 9, and 10 | Yes | Partial | Partial | No
Desktop and mobile IE version 11 | Yes | Yes | Yes | No
Microsoft Edge | Yes | Yes | Yes | No
Mozilla Firefox 22 and below | Yes | No | No | No
Mozilla Firefox 23 to 26 | Yes | Partial | Partial | No
Mozilla Firefox 27 and higher | Yes | Yes | Yes | No
Google Chrome 21 and below | Yes | No | No | No
Google Chrome 22 to 37 | Yes | Partial | Partial | No
Google Chrome 38 and higher | Yes | Yes | Yes | No
Android 4.3 (Jelly Bean) and below | Yes | No | No | No
Android 4.4 (KitKat) to 4.4.4 | Yes | Partial | Partial | No
Android 5.0 (Lollipop) and higher | Yes | Yes | Yes | No
Mobile Safari for iOS 4 and below | Yes | No | No | No
Mobile Safari versions 5 and higher for iOS 5 and higher | Yes | Yes | Yes | No
Desktop Safari version 6 for OS X 10.8 (Mountain Lion) | Yes | No | No | No
Desktop Safari versions 7+ for OS X 10.9 (Mavericks)+ | Yes | Yes | Yes | No

Desktop clients

Desktop Clients | TLS 1.0 | TLS 1.1 | TLS 1.2 | TLS 1.3
Windows XP | Yes | No | No | No
Windows XP SP3 | Yes | No | No | No
Windows Vista | Yes | No | No | No
Windows 7 SP1 | Yes | Yes | Yes | No
Windows 8 | Yes | Partial | Partial | No
Windows 8.1 | Yes | Yes | Yes | No
Windows 10 | Yes | Yes | Yes | No
MAC OS X 10.2 and 10.3 | Yes | No | No | No
MAC OS X 10.4 and 10.5 | Yes | No | No | No
MAC OS X 10.6 and 10.7 | Yes | No | No | No
MAC OS X 10.8 | Yes | No | No | No
MAC OS X 10.9 | Yes | Yes | Yes | No
MAC OS X 10.10 | Yes | Yes | Yes | No
MAC OS X 10.11 | Yes | Yes | Yes | No
MAC OS X 10.12 | Yes | Yes | Yes | No
MAC OS X 10.13 | Yes | Yes | Yes | No
Linux | Yes | Yes | Yes | No

Mobile Clients

Mobile Clients | TLS 1.0 | TLS 1.1 | TLS 1.2 | TLS 1.3
Airwatch | Yes | Yes | Partial | No
Android versions 1.0 to 4.4.4 | Yes | No | No | No
Android versions 5.0 to 8.1 and Android P | Yes | Yes | Yes | No
iPhone OS versions 1, 2, 3, and 4 | Yes | No | No | No
iPhone OS versions 5, 6, 7, 8, 9, 10, and 11 | Yes | Yes | Yes | No
MobileIron Core versions 9.4 and below | Yes | No | No | No
MobileIron Core versions 9.5 and higher | Yes | Yes | Yes | No
MobileIron Cloud | Yes | Yes | Yes | No
Windows Phone versions 7, 7.5, 7.8 and 8 | Yes | No | No | No
Windows Phone version 8.1 | Yes | Yes | Yes | No
Windows 10 Mobile versions v1511, v1607, v1703, and v1709 | Yes | Yes | Yes | No

Servers

Servers | TLS 1.0 | TLS 1.1 | TLS 1.2 | TLS 1.3
Windows Server 2003 | Yes | No | No | No
Windows Server 2008 | Yes | No | No | No
Windows Server 2008 R2 | Yes | Yes | Yes | No
Windows Server 2012 | Yes | Partial | Partial | No
Windows Server 2012 R2 | Yes | Yes | Yes | No
Windows Server 2016 | Yes | Yes | Yes | No

Libraries

Libraries | TLS 1.0 | TLS 1.1 | TLS 1.2 | TLS 1.3
.NET 4.6 and higher | Yes | Yes | Yes | No
.NET 4.5 to 4.5.2 | Yes | Partial | Partial | No
.NET 4.0 | Yes | No | Partial | No
.NET 3.5 and below | Yes | No | No | No
OpenSSL versions 1.0.0 and below | Yes | No | No | No
OpenSSL versions 1.0.1 and higher | Yes | Yes | Yes | No
Mozilla NSS versions 3.13.6 and below | Yes | No | No | No
Mozilla NSS versions 3.14 to 3.15 | Yes | Yes | No | No
Mozilla NSS versions 3.15.1 and higher | Yes | Yes | Yes | No

Even more links

https://www.gracefulsecurity.com/tls-ssl-vulnerabilities/
https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/
https://www.acunetix.com/blog/articles/tls-ssl-cipher-hardening/
https://support.globalsign.com/customer/portal/articles/2934392-tls-protocol-compatibility
https://www.ssl.com/article/tls-1-3-is-here-to-stay/