Implementing HTTP/2 with mbed TLS

I’ve recently been exploring mbed TLS and thought I’d share some numbers I’ve found.

First, the specifics:

  • mbed TLS version: 1.3.10
  • Compiler: arm-none-eabi-gcc (GNU Tools for ARM Embedded Processors) 4.9.3 20141119 (release) [ARM/embedded-4_9-branch revision 218278]
  • Processor: TI CC3200 (ARM Cortex-M4 core)

As I’ve been primarily focused on an HTTP/2 server here, I configured mbed TLS to support the mandatory-to-implement (MTI) ciphersuite for HTTP/2, TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256, along with various required features such as SNI, ALPN, and X.509 certificate parsing. This resulted in a config.h file with the following items #define'd (see the excerpt after this list for the actual syntax):

  • POLARSSL_HAVE_LONGLONG
  • POLARSSL_HAVE_ASM
  • POLARSSL_HAVE_IPV6
  • POLARSSL_PLATFORM_PRINTF_ALT
  • POLARSSL_PLATFORM_FPRINTF_ALT
  • POLARSSL_REMOVE_ARC4_CIPHERSUITES
  • POLARSSL_ECP_DP_SECP256R1_ENABLED
  • POLARSSL_KEY_EXCHANGE_ECDHE_RSA_ENABLED
  • POLARSSL_NO_PLATFORM_ENTROPY
  • POLARSSL_PKCS1_V15
  • POLARSSL_SSL_EXTENDED_MASTER_SECRET
  • POLARSSL_SSL_DISABLE_RENEGOTIATION
  • POLARSSL_SSL_MAX_FRAGMENT_LENGTH
  • POLARSSL_SSL_PROTO_TLS1_2
  • POLARSSL_SSL_ALPN
  • POLARSSL_SSL_SERVER_NAME_INDICATION
  • POLARSSL_SSL_TRUNCATED_HMAC
  • POLARSSL_SSL_SET_CURVES
  • POLARSSL_X509_CHECK_KEY_USAGE
  • POLARSSL_X509_CHECK_EXTENDED_KEY_USAGE
  • POLARSSL_AES_C
  • POLARSSL_ASN1_PARSE_C
  • POLARSSL_BIGNUM_C
  • POLARSSL_CIPHER_C
  • POLARSSL_CTR_DRBG_C
  • POLARSSL_ECDH_C
  • POLARSSL_ECP_C
  • POLARSSL_ENTROPY_C
  • POLARSSL_GCM_C
  • POLARSSL_MD_C
  • POLARSSL_OID_C
  • POLARSSL_PK_C
  • POLARSSL_PK_PARSE_C
  • POLARSSL_PLATFORM_C
  • POLARSSL_RSA_C
  • POLARSSL_SHA256_C
  • POLARSSL_SSL_CACHE_C
  • POLARSSL_SSL_SRV_C
  • POLARSSL_SSL_TLS_C
  • POLARSSL_X509_USE_C
  • POLARSSL_X509_CRT_PARSE_C
  • SSL_CIPHERSUITES TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
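
For reference, here's roughly what this looks like in config.h. Most of these items are simple feature flags, while SSL_CIPHERSUITES takes a value that restricts the build to a list of suites (just one in this case):

/* config.h (excerpt) */
#define POLARSSL_SSL_PROTO_TLS1_2
#define POLARSSL_SSL_ALPN
#define POLARSSL_SSL_SERVER_NAME_INDICATION
#define POLARSSL_KEY_EXCHANGE_ECDHE_RSA_ENABLED
#define POLARSSL_ECP_DP_SECP256R1_ENABLED
/* ... the remaining POLARSSL_* items from the list above ... */

/* Restrict the build to the single HTTP/2 MTI suite */
#define SSL_CIPHERSUITES TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256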

With this config.h in place, I executed the following command:
CC=arm-none-eabi-gcc AR=arm-none-eabi-ar CFLAGS+="-mthumb -mcpu=cortex-m4 -ffunction-sections -fdata-sections" make lib

(Note that -ffunction-sections and -fdata-sections don't shrink the library itself; they pay off later, when the final application is linked with --gc-sections and the linker can discard unused code and data.)

This resulted in a static library (libmbedtls.a) with a size of 238972 bytes. Keep in mind that this is doing everything in software (AES, SHA, ECC, etc.).
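
Since the config above enables ALPN (which is how HTTP/2 is negotiated over TLS), here's a minimal sketch of the runtime side using the 1.3 API. The helper name is mine, and it assumes an ssl_context that has already been set up as a server:

#include "polarssl/ssl.h"

/* NULL-terminated list of ALPN tokens, in our order of preference;
 * "h2" is the token for HTTP/2 over TLS */
static const char *alpn_list[] = { "h2", NULL };

int enable_http2_alpn( ssl_context *ssl )
{
    /* Advertise HTTP/2 via ALPN; returns 0 on success */
    return ssl_set_alpn_protocols( ssl, alpn_list );
}

After the handshake completes, ssl_get_alpn_protocol() tells you whether the client actually agreed to "h2".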

One trick I learned along the way: store your certificates and keys in DER format rather than PEM. This allows you to remove POLARSSL_PEM_PARSE_C and POLARSSL_BASE64_C. With this trick, the static library (libmbedtls.a) went from 243536 bytes down to 238972 bytes. It also reduces the size of the certificates and keys themselves, since DER is the raw binary form that PEM merely Base64-encodes and wraps in header/footer lines.
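
As a rough sketch of what loading DER credentials looks like with the 1.3 API (the buffer names here are hypothetical; the blobs would be converted offline and embedded in the firmware image):

#include "polarssl/x509_crt.h"
#include "polarssl/pk.h"

/* DER blobs, e.g. produced offline with
 *   openssl x509 -in cert.pem -outform DER -out cert.der
 *   openssl rsa -in key.pem -outform DER -out key.der
 * and embedded in flash (hypothetical names) */
extern const unsigned char server_cert_der[];
extern const size_t server_cert_der_len;
extern const unsigned char server_key_der[];
extern const size_t server_key_der_len;

int load_credentials( x509_crt *crt, pk_context *pk )
{
    int ret;

    x509_crt_init( crt );
    pk_init( pk );

    /* x509_crt_parse_der() accepts raw DER directly, so the
     * PEM/Base64 modules can be compiled out entirely */
    ret = x509_crt_parse_der( crt, server_cert_der, server_cert_der_len );
    if( ret != 0 )
        return ret;

    return pk_parse_key( pk, server_key_der, server_key_der_len, NULL, 0 );
}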

If you have any optimizations or findings with mbed TLS, particularly for HTTP/2, please share in the comments!

Open Source Software — Who Actually Reviews the Code?

This post is co-authored by Robby Simpson and Sven Krasser. So, you can find it on both Robby’s and Sven’s blogs — you should check them both out!

Last year saw a large number of critical bugs in open source software (OSS). These bugs received a lot of media attention and re-opened the discussion of bugs and security in OSS. This has led many to question whether Eric S. Raymond's famous statement ("Linus's Law") that "given enough eyeballs, all bugs are shallow" holds true.

There are two aspects to consider here. First, does bug discovery parallelize well? Particularly for subtle security-related bugs, a large number of users does not necessarily aid discovery; a dedicated review effort may be required instead. We'll leave this aspect for a separate discussion…

Second, are there actually more eyeballs looking at open source software? For that question, we have some data to contribute to the discussion. In 2003, Robby released NETI@home, a project to gather network performance metrics from end hosts, as part of his PhD dissertation work at Georgia Tech. You can find the source code on SourceForge. The NETI@home agent runs on end user machines and gathers various network performance data (e.g., number of flows per protocol, number of packets per flow, TCP window size). Such data has many uses, including improving models in network simulations and observing suspicious traffic patterns.

A driving factor for releasing NETI@home as OSS was that it gathers a lot of information that could raise privacy concerns with users, and such concerns could hinder adoption. The most forthcoming way to address them is to allow users to actually read the code. As researchers, that piqued our curiosity: how many users have these concerns and will go on to review the source code?

How could we measure this kind of user code review? Download stats for the source code are an option, but they don't tell us much about users actually looking at the code. Instead, we placed a comment in the section of code where privacy preferences are honored, which reads:

/*
* You have found the "hid-
* den message!" Please visit
* http://www.neti.gatech.edu/sec/sec.html
* and log in as user 'neti'
* and pw 'hobbit'
*/

The web page mentioned in this comment contained an explanation along with an email address (it was taken down around 2009). Visiting a link is a lower bar than sending an email, so we watched for both pageviews of that page and emails to the address given on it. The former would have told us that someone found the comment, while the latter would have confirmed that someone took action on it. However, the page received no views at all (and, consequently, no emails either).

To put this into perspective with NETI@home's user base: there were about 13,000 downloads of the software and about 4,500 active users running the agent. We can safely say that the typical user running the software falls into the geek category, so there's some expected selection bias toward taking an interest in the source code.

Granted, this is slightly different from contributors to an open source project reviewing code. Nonetheless, the result came as a surprise to us, and it certainly went against the conventional wisdom at the time.

As fans of OSS, we were both disappointed in the results. However, we hope that sharing this data point will add to the larger discussion, help strengthen the open source community, and show the need for dedicated code review.

We would love to hear your thoughts on this study, the effectiveness of OSS, and how to improve these shortcomings. Feel free to leave comments below!