HTTP/1.1 has become synonymous with “HTTP/1” because it took several steps toward enabling the scale that the web was starting to experience in the late 1990s. It built on the foundational concepts we explored in HTTP/0.9 and HTTP/1.0 with advancements and adjustments that enabled the web to scale for almost two decades.

While newer protocols like HTTP/2 and HTTP/3 have since arrived with their own improvements, HTTP/1.1 remains a non-negotiable requirement… well, almost. A few people believe it’s time to kill HTTP/1.1, and some even argue that doing so would immediately reduce bot traffic. Still, it has historically been the default transport for the web and the protocol that servers and clients fall back on. Its simplicity and power are why, even now, a massive portion of internet traffic flows over HTTP/1.1.

Let’s start by looking at the new features in HTTP/1.1 that made it such a sturdy pillar of the web for so long, and then build a server in Go that implements them from scratch.

Improvements over HTTP/1.0

HTTP/1.1 wasn’t just a bump in the version number; it was a targeted response to the scaling bottlenecks of the 90s. It introduced specific features to cut down on connection overhead and handle dynamic data more efficiently than 1.0 ever could.

Persistent Connections (Keep-Alive)

This is probably the most important performance improvement in HTTP/1.1. In HTTP/1.0, every single request required a new, separate TCP connection. Setting up a TCP connection is a multi-step handshake process that introduces significant latency. For a typical website that requires dozens of resources (CSS, JavaScript, images), this overhead added up quickly. And this was before the web was encrypted, so wrapping requests in TLS for “secure pages” would add two more round trips on top of this.

HTTP/1.1 introduced persistent connections by default. This allows the browser to send multiple requests over a single TCP connection, eliminating the repeated connection setup cost.

A client or server can signal that they wish to close the connection after a request by sending the Connection: close header. Otherwise, the connection is assumed to be “kept alive”.
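
On the wire, a kept-alive exchange might look like this (a simplified sketch with most headers omitted; example.com is a placeholder):

GET /index.html HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Length: 1024

...1024 bytes of body...

GET /style.css HTTP/1.1
Host: example.com
Connection: close

HTTP/1.1 200 OK
Connection: close
Content-Length: 512

...512 bytes of body...

Both request/response pairs travel over a single TCP connection; the Connection: close on the second request tells the server to tear the connection down after responding.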

The Host Header: Enabling Virtual Hosting

In the early web, a server at a specific IP address hosted a single website. As the web grew, this became incredibly inefficient. The Host header, which became mandatory in HTTP/1.1, solved this. It specifies the domain name of the server the client is trying to reach. This allowed a single server (with a single IP address) to host hundreds or thousands of different websites, a practice known as virtual hosting. This was a critical innovation for the economic scalability of web hosting.
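
For example, these two requests could arrive at the same IP address and be handled by the same server process, yet be routed to completely different sites (the domains are placeholders):

GET / HTTP/1.1
Host: blog.example.com

GET / HTTP/1.1
Host: shop.example.com

Without the Host header, the server would have no way to tell which site the client actually wanted.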

Chunked Transfer Encoding

Before HTTP/1.1, a server generally had to know a response’s exact size beforehand to set the Content-Length header; the only alternative was to close the connection to mark the end of the body, which defeats connection reuse. This was fine for static files but a major problem for dynamically generated content. What if you were streaming a large video or generating a big HTML page on the fly? You’d have to buffer the entire response in memory just to calculate its size.

Chunked Transfer Encoding elegantly solves this. The server can send the response body in a series of “chunks.” Each chunk is prefixed with its size in hexadecimal, followed by the chunk data itself. The stream is terminated by a final chunk of size 0.

Here’s what a chunked response looks like on the wire:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

8\r\n
kmcd.dev\r\n
c\r\n
 is awesome!\r\n
0\r\n
\r\n

This allowed for much more efficient handling of dynamic content and laid the groundwork for streaming media.

Modern Caching with Cache-Control

HTTP/1.0 had basic caching headers, but HTTP/1.1 introduced the powerful Cache-Control header. This gave developers fine-grained control over how browsers and intermediate proxies cache resources. Directives like max-age, public, private, no-cache, and no-store allowed for sophisticated caching strategies, dramatically reducing bandwidth usage and improving load times.

A quick look at my own site’s headers shows this in action:

$ curl --http1.1 -I https://kmcd.dev
HTTP/1.1 200 OK
Date: Sat, 24 Jan 2026 10:12:55 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Cache-Control: max-age=31536000, public
Server: cloudflare

This tells any browser or CDN that it’s safe to cache this response for a full year, which is great for performance.
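
At the other extreme, a response carrying sensitive or rapidly changing data might instead send:

Cache-Control: no-store

which tells every cache, shared or private, not to store the response at all.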

More Methods for RESTful APIs

HTTP/1.0 was primarily about GET, POST, and HEAD. HTTP/1.1 expanded the vocabulary of the web by adding new methods that were crucial for the development of RESTful APIs:

  • PUT: Create or replace a resource at a given URI.
  • PATCH: Partially modify a resource. (Strictly speaking, PATCH was standardized later in RFC 5789, but it builds directly on HTTP/1.1’s semantics.)
  • DELETE: Delete a resource at a given URI.
  • CONNECT: Establish a tunnel, most notably for HTTPS through proxies.
  • OPTIONS: Describe the communication options for the target resource.
  • TRACE: Perform a message loop-back test along the path to the target resource.
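
To make one of these concrete, here’s a hypothetical OPTIONS exchange (the endpoint and the Allow list are invented for illustration):

OPTIONS /articles HTTP/1.1
Host: api.example.com

HTTP/1.1 204 No Content
Allow: GET, POST, OPTIONS

The Allow header advertises which methods the target resource supports.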

100 Continue Status Code

When a client needs to send a large request body (like uploading a file), it can be inefficient to send the entire payload only for the server to reject it (e.g., due to authentication failure or size limits). The 100 Continue status code provides a solution.

The client can send the request headers with the Expect: 100-continue header and then wait.

PUT /images HTTP/1.1
Host: images.example.com
Content-Type: image/png
Content-Length: 500000
Expect: 100-continue

If the server is willing to accept the request, it responds with HTTP/1.1 100 Continue. The client then proceeds to send the request body. If the server is not going to accept it, it can immediately send a final error code like 413 Payload Too Large, and the client knows not to waste bandwidth sending the body.
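
Putting it together, a sketch of a successful upload looks like this (the final 201 status is just one plausible outcome):

PUT /images HTTP/1.1
Host: images.example.com
Content-Type: image/png
Content-Length: 500000
Expect: 100-continue

HTTP/1.1 100 Continue

[client sends the 500,000-byte body]

HTTP/1.1 201 Created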

Building a Simple HTTP/1.1 Server in Go

Now for the fun part. Let’s build a server that understands these new features. We’ll be pulling from the complete server code found here: main.go.

Handling Persistent Connections

To support keep-alive, our connection handler can’t just handle one request and then close the connection. It needs to loop, processing multiple requests on the same connection until the client or server decides to close it.

The structure of our server looks like this: ListenAndServe accepts new TCP connections and spins up a handleConnection goroutine for each one.

// ListenAndServe starts the server.
func (s *Server) ListenAndServe() error {
	// ... listener setup ...
	for {
		conn, err := l.Accept()
		if err != nil {
			return err
		}

		go func() {
			if err := s.handleConnection(conn); err != nil {
				slog.Error(fmt.Sprintf("http error: %s", err))
			}
		}()
	}
}

Now let’s look at the new handleConnection method. It contains an infinite loop that repeatedly calls handleRequest. Previously, this method didn’t exist because in HTTP/1.0 we only ever handled a single request per connection. The loop only exits if there’s a fatal error (timeout, protocol error, etc.) or if handleRequest signals that the connection should be closed, such as when the client sends a request with the Connection: close header.

func (s *Server) handleConnection(conn net.Conn) error {
	defer conn.Close()
	for {
		// handleRequest does the work of reading and responding
		shouldClose, err := s.handleRequest(conn)
		if err != nil {
			// io.EOF is a normal way for a persistent connection to end.
			if errors.Is(err, io.EOF) {
				return nil
			}
			return err
		}
		if shouldClose {
			return nil // Client requested a close, so we exit the loop.
		}
	}
}

Inside handleRequest, we determine if the connection should be closed by inspecting the Connection header.

// Default to keeping the connection alive for HTTP/1.1
req.Close = false

// Check if the client or a previous handler wants to close the connection.
switch strings.ToLower(req.Header.Get("Connection")) {
case "close":
	req.Close = true
case "keep-alive":
	// This is the default, but we handle it explicitly.
	req.Close = false
}
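
Later, when the response headers go out, our writeHeader method (shown in the full listing below) echoes this decision back to the client:

if r.req.Close {
	r.headers.Set("Connection", "close")
} else {
	r.headers.Set("Connection", "keep-alive")
}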

Requiring the Host Header

This is a simple but crucial part of our server. After parsing the headers, we just check for the presence of the Host header. If it’s missing, we return an error and close the connection.

if _, ok := req.Header["Host"]; !ok {
    // We send an error response here in a real implementation.
    return true, errors.New("required 'Host' header not found")
}

Handling Chunked Bodies

This is the most complex part. We need to be able to both read a chunked request body from a client and send a chunked response.

Reading a Chunked Request

When we parse the request headers, if we see Transfer-Encoding: chunked, we know we can’t just use io.LimitReader with Content-Length. Instead, we need a special reader. Our chunkedBodyReader does just that.

type chunkedBodyReader struct {
	reader *bufio.Reader
	n      int64 // bytes left in current chunk
	err    error
}

func (r *chunkedBodyReader) Read(p []byte) (n int, err error) {
	if r.err != nil {
		return 0, r.err
	}
	// Do we need to read the size of the next chunk?
	if r.n == 0 {
		r.n, r.err = r.readChunkSize()
		if r.err != nil {
			return 0, r.err
		}
	}
	// If the next chunk size is 0, we're at the end.
	if r.n == 0 {
		return 0, io.EOF
	}
    // ... logic to read from the current chunk ...
}

The readChunkSize method is responsible for reading a line, parsing the hexadecimal chunk size, and preparing the reader to consume that many bytes.
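
Here it is from the full listing. A chunk size of 0 means the body is complete, at which point we also consume any optional trailing headers (“trailers”) that follow the final chunk:

func (r *chunkedBodyReader) readChunkSize() (int64, error) {
	line, err := r.readLine()
	if err != nil {
		return 0, err
	}
	// chunkSize is hex
	n, err := strconv.ParseInt(strings.TrimSpace(line), 16, 64)
	if err != nil {
		return 0, err
	}
	if n == 0 {
		// Read trailers
		for {
			line, err := r.readLine()
			if err != nil {
				return 0, err
			}
			if len(line) == 0 {
				break
			}
		}
	}
	return n, nil
}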

Sending a Chunked Response

On the response side, things are even cooler. If our http.ResponseWriter implementation doesn’t have a Content-Length set when WriteHeader is called, we can automatically switch to using chunked encoding.

Our responseBodyWriter checks for this condition.

func (r *responseBodyWriter) writeHeader(conn io.Writer, proto string, headers http.Header, statusCode int) error {
	_, clSet := r.headers["Content-Length"]
	_, teSet := r.headers["Transfer-Encoding"]
	// If no length is set, we decide to use chunking.
	if !clSet && !teSet {
		r.chunkedEncoding = true
		r.headers.Set("Transfer-Encoding", "chunked")
	}
    // ... write headers ...
}

Then, if chunkedEncoding is true, the Write method writes each chunk with the required framing (nlcf, defined in the full listing, is the CRLF pair \r\n).

func (r *responseBodyWriter) Write(b []byte) (int, error) {
	// ... ensure headers are written ...

	if r.chunkedEncoding {
		// Write the chunk size in hex, followed by \r\n
		chunkSize := fmt.Sprintf("%x\r\n", len(b))
		if _, err := r.conn.Write([]byte(chunkSize)); err != nil {
			return 0, err
		}
	}

	// Write the actual chunk data
	n, err := r.conn.Write(b)
	if err != nil {
		return n, err
	}

	if r.chunkedEncoding {
		// Write the trailing \r\n for the chunk
		if _, err := r.conn.Write(nlcf); err != nil {
			return n, err
		}
	}

	return n, nil
}

Finally, after the last Write call, we send the terminal 0\r\n\r\n chunk to signal the end of the response.
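
That happens in our writer’s flush method, which handleRequest calls once the handler returns. It also makes sure the headers get written even if the handler never produced a body:

func (r *responseBodyWriter) flush() error {
	// Make sure the headers go out even if the handler never wrote a body.
	if !r.sentHeaders {
		r.WriteHeader(http.StatusOK)
	}
	if r.chunkedEncoding {
		if _, err := r.conn.Write([]byte("0\r\n\r\n")); err != nil {
			return err
		}
	}
	r.writeBufferedBody()
	return nil
}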

Testing Our Server

Using command-line tools is the best way to see these features in action.

Testing Keep-Alive

We can use curl’s verbose mode (-v) to see connection reuse. We’ll make two requests to our server on the same command line.

$ curl --http1.1 -v http://127.0.0.1:9000/headers http://127.0.0.1:9000/status/204

*   Trying 127.0.0.1:9000...
* Connected to 127.0.0.1 (127.0.0.1) port 9000 (#0)
> GET /headers HTTP/1.1
> Host: 127.0.0.1:9000
> ...

< HTTP/1.1 200 OK
< Connection: keep-alive
< ...
* Connection #0 to host 127.0.0.1 left intact
* Found bundle for host 127.0.0.1: 0x1400084c0 [can pipeline]
* Re-using existing connection! (#0) with host 127.0.0.1
> GET /status/204 HTTP/1.1
> Host: 127.0.0.1:9000
> ...

< HTTP/1.1 204 No Content
< Connection: keep-alive
< ...
* Connection #0 to host 127.0.0.1 left intact

The key lines are Re-using existing connection! and Connection #0 to host 127.0.0.1 left intact, which confirm that the second request was sent over the same connection as the first.

Testing the Host Header

Using netcat (nc), we can manually craft an HTTP request. First, let’s try one without a Host header.

$ printf "GET / HTTP/1.1\r\n\r\n" | nc localhost 9000

The command returns nothing, and on the server side, we see our error log:

2026/01/24 19:34:39 ERROR http error: required 'Host' header not found

Success! Now let’s add the Host header.

$ printf "GET /headers HTTP/1.1\r\nHost: localhost\r\n\r\n" | nc localhost 9000
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json
Transfer-Encoding: chunked

61
{"Accept-Encoding":["gzip"],"Host":["localhost"],"User-Agent":["Go-http-client/1.1"]}
0

It works perfectly, and we even get a chunked response back!

Testing Chunked Encoding

Our server has an /echo/chunked endpoint that streams the request body right back to the response. We can use curl to send a chunked request to it.

# We send two chunks: "hello" and " world"
$ (
  printf "5\r\nhello\r\n";
  sleep 1;
  printf "6\r\n world\r\n";
  sleep 1;
  printf "0\r\n\r\n";
) | curl --http1.1 -X POST --header "Transfer-Encoding: chunked" -T - http://127.0.0.1:9000/echo/chunked

hello world

The command pipes a manually created chunked body into curl. curl sends it to our server, which reads it using chunkedBodyReader and writes it back using our chunked responseBodyWriter. The final output hello world confirms the whole process worked.

Conclusion

HTTP/1.1 was and still is an amazing protocol. It introduced connection reuse, virtual hosting, and streaming request and response bodies. The design choices made in HTTP/1.1 were so robust that they remain deeply embedded in the internet’s infrastructure today.

If HTTP/1.1 was so great, why was HTTP/2 created? And what’s the deal with HTTP/3? Stay tuned for the next post in this series where we start looking at HTTP/2.

See all of the code mentioned in this article here:

go/main.go
package main

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log"
	"log/slog"
	"math"
	"net"
	"net/http"
	"net/url"
	"strconv"
	"strings"
)

var nlcf = []byte{0x0d, 0x0a} // CRLF: "\r\n"

// Server is a simple HTTP/1.1 server.
type Server struct {
	Addr    string
	Handler http.Handler
}

// ListenAndServe starts the server.
func (s *Server) ListenAndServe() error {
	// handleRequest serves via s.Handler, so apply the fallback there.
	if s.Handler == nil {
		s.Handler = http.DefaultServeMux
	}
	l, err := net.Listen("tcp", s.Addr)
	if err != nil {
		return err
	}
	defer l.Close()

	for {
		conn, err := l.Accept()
		if err != nil {
			return err
		}

		go func() {
			if err := s.handleConnection(conn); err != nil {
				slog.Error(fmt.Sprintf("http error: %s", err))
			}
		}()
	}
}

func (s *Server) handleConnection(conn net.Conn) error {
	defer conn.Close()
	for {
		shouldClose, err := s.handleRequest(conn)
		if err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return err
		}
		if shouldClose {
			return nil
		}
	}
}

func (s *Server) handleRequest(conn net.Conn) (bool, error) {
	// Limit headers to 1MB
	limitReader := io.LimitReader(conn, 1*1024*1024).(*io.LimitedReader)
	reader := bufio.NewReader(limitReader)

	reqLineBytes, _, err := reader.ReadLine()
	if err != nil {
		return true, fmt.Errorf("read request line error: %w", err)
	}
	reqLine := string(reqLineBytes)

	req := new(http.Request)
	var found bool

	req.Method, reqLine, found = strings.Cut(reqLine, " ")
	if !found {
		return true, errors.New("invalid method")
	}
	if !methodValid(req.Method) {
		return true, errors.New("invalid method")
	}

	req.RequestURI, reqLine, found = strings.Cut(reqLine, " ")
	if !found {
		return true, errors.New("invalid path")
	}
	if req.URL, err = url.ParseRequestURI(req.RequestURI); err != nil {
		return true, fmt.Errorf("invalid path: %w", err)
	}

	req.Proto = reqLine
	req.ProtoMajor, req.ProtoMinor, found = parseProtocol(req.Proto)
	if !found {
		return true, errors.New("invalid protocol")
	}

	req.Header = make(http.Header)
	for {
		line, _, err := reader.ReadLine()
		if err != nil && err != io.EOF {
			return true, err
		} else if err != nil {
			break
		}
		if len(line) == 0 {
			break
		}

		k, v, ok := bytes.Cut(line, []byte{':'})
		if !ok {
			return true, errors.New("invalid header")
		}
		req.Header.Add(strings.ToLower(string(k)), strings.TrimLeft(string(v), " "))
	}

	if _, ok := req.Header["Host"]; !ok {
		return true, errors.New("required 'Host' header not found")
	}

	switch strings.ToLower(req.Header.Get("Connection")) {
	case "keep-alive", "":
		req.Close = false
	case "close":
		req.Close = true
	}

	// Lift the 1MB header limit now that we're reading the body.
	limitReader.N = math.MaxInt64

	ctx := context.Background()
	ctx = context.WithValue(ctx, http.LocalAddrContextKey, conn.LocalAddr())
	ctx, cancelCtx := context.WithCancel(ctx)
	defer cancelCtx()
	contentLength, err := parseContentLength(req.Header.Get("Content-Length"))
	if err != nil {
		return true, err
	}
	req.ContentLength = contentLength
	isChunked := req.Header.Get("Transfer-Encoding") == "chunked"
	if req.ContentLength == 0 && !isChunked {
		req.Body = noBody{}
	} else {
		if isChunked {
			req.Body = &chunkedBodyReader{
				reader: reader,
			}
		} else {
			req.Body = &bodyReader{
				reader: io.LimitReader(reader, req.ContentLength),
			}
		}
	}

	req.RemoteAddr = conn.RemoteAddr().String()

	w := &responseBodyWriter{
		req:     req,
		conn:    conn,
		headers: make(http.Header),
	}

	s.Handler.ServeHTTP(w, req.WithContext(ctx))
	if err := w.flush(); err != nil {
		return true, err
	}
	return req.Close, nil
}

type noBody struct{}

func (noBody) Read([]byte) (int, error) { return 0, io.EOF }
func (noBody) Close() error             { return nil }

func parseContentLength(headerval string) (int64, error) {
	if headerval == "" {
		return 0, nil
	}
	return strconv.ParseInt(headerval, 10, 64)
}

func parseProtocol(proto string) (int, int, bool) {
	switch proto {
	case "HTTP/1.0":
		return 1, 0, true
	case "HTTP/1.1":
		return 1, 1, true
	}
	return 0, 0, false
}

func methodValid(method string) bool {
	switch method {
	case http.MethodGet, http.MethodHead, http.MethodPost, http.MethodPut, http.MethodPatch, http.MethodDelete, http.MethodConnect, http.MethodOptions, http.MethodTrace:
		return true
	}
	return false
}

type bodyReader struct {
	reader io.Reader
}

func (r *bodyReader) Read(p []byte) (n int, err error) {
	return r.reader.Read(p)
}

func (r *bodyReader) Close() error {
	_, err := io.Copy(io.Discard, r.reader)
	return err
}

type chunkedBodyReader struct {
	reader *bufio.Reader
	n      int64 // bytes left in current chunk
	err    error
}

func (r *chunkedBodyReader) Read(p []byte) (n int, err error) {
	if r.err != nil {
		return 0, r.err
	}
	if r.n == 0 {
		r.n, r.err = r.readChunkSize()
		if r.err != nil {
			return 0, r.err
		}
	}
	if r.n == 0 {
		return 0, io.EOF
	}
	if int64(len(p)) > r.n {
		p = p[0:r.n]
	}
	n, err = r.reader.Read(p)
	r.n -= int64(n)
	if r.n == 0 && err == nil {
		// Read trailing \r\n
		b, err := r.reader.ReadByte()
		if err != nil {
			r.err = err
			return n, err
		}
		if b != '\r' {
			r.err = errors.New("missing \r after chunk")
			return n, r.err
		}
		b, err = r.reader.ReadByte()
		if err != nil {
			r.err = err
			return n, err
		}
		if b != '\n' {
			r.err = errors.New("missing \n after chunk")
			return n, r.err
		}
	}
	r.err = err
	return n, err
}

func (r *chunkedBodyReader) readChunkSize() (int64, error) {
	line, err := r.readLine()
	if err != nil {
		return 0, err
	}
	// chunkSize is hex
	n, err := strconv.ParseInt(strings.TrimSpace(line), 16, 64)
	if err != nil {
		return 0, err
	}
	if n == 0 {
		// Read trailers
		for {
			line, err := r.readLine()
			if err != nil {
				return 0, err
			}
			if len(line) == 0 {
				break
			}
		}
	}
	return n, nil
}

func (r *chunkedBodyReader) readLine() (string, error) {
	var line []byte
	for {
		b, err := r.reader.ReadByte()
		if err != nil {
			return "", err
		}
		if b == '\n' {
			break
		}
		line = append(line, b)
	}
	return strings.TrimRight(string(line), "\r"), nil
}

func (r *chunkedBodyReader) Close() error {
	_, err := io.Copy(io.Discard, r)
	return err
}

type responseBodyWriter struct {
	req             *http.Request
	conn            net.Conn
	sentHeaders     bool
	headers         http.Header
	chunkedEncoding bool
	bodyBuffer      *bytes.Buffer
}

func (r *responseBodyWriter) Header() http.Header {
	return r.headers
}

func (r *responseBodyWriter) Write(b []byte) (int, error) {
	if !r.sentHeaders {
		if r.headers.Get("Content-Type") == "" {
			r.headers.Set("Content-Type", http.DetectContentType(b))
		}
		r.WriteHeader(http.StatusOK)
	}

	if r.chunkedEncoding {
		chunkSize := fmt.Sprintf("%x\r\n", len(b))
		if _, err := r.conn.Write([]byte(chunkSize)); err != nil {
			return 0, err
		}
	}

	n, err := r.conn.Write(b)
	if err != nil {
		return n, err
	}

	if r.chunkedEncoding {
		if _, err := r.conn.Write(nlcf); err != nil {
			return n, err
		}
	}

	return n, nil
}

func (r *responseBodyWriter) Flush() {
	if !r.sentHeaders {
		r.WriteHeader(http.StatusOK)
	}
	if flusher, ok := r.conn.(interface{ Flush() error }); ok {
		flusher.Flush()
	}
}

func (r *responseBodyWriter) flush() error {
	// Make sure the headers go out even if the handler never wrote a body.
	if !r.sentHeaders {
		r.WriteHeader(http.StatusOK)
	}
	if r.chunkedEncoding {
		if _, err := r.conn.Write([]byte("0\r\n\r\n")); err != nil {
			return err
		}
	}
	r.writeBufferedBody()
	return nil
}

func (r *responseBodyWriter) WriteHeader(statusCode int) {
	if r.sentHeaders {
		slog.Warn(fmt.Sprintf("WriteHeader called twice, second time with: %d", statusCode))
		return
	}

	r.writeHeader(r.conn, r.req.Proto, r.headers, statusCode)
	r.sentHeaders = true
	r.writeBufferedBody()
}

func (r *responseBodyWriter) writeBufferedBody() {
	if r.bodyBuffer != nil {
		_, err := r.conn.Write(r.bodyBuffer.Bytes())
		if err != nil {
			slog.Error("Error writing buffered body", "err", err)
		}
		r.bodyBuffer = nil
	}
}

func (r *responseBodyWriter) writeHeader(conn io.Writer, proto string, headers http.Header, statusCode int) error {
	_, clSet := r.headers["Content-Length"]
	_, teSet := r.headers["Transfer-Encoding"]
	if !clSet && !teSet {
		r.chunkedEncoding = true
		r.headers.Set("Transfer-Encoding", "chunked")
	}

	if r.req.Close {
		r.headers.Set("Connection", "close")
	} else {
		r.headers.Set("Connection", "keep-alive")
	}

	if _, err := io.WriteString(conn, proto); err != nil {
		return err
	}
	if _, err := conn.Write([]byte{' '}); err != nil {
		return err
	}
	if _, err := io.WriteString(conn, strconv.FormatInt(int64(statusCode), 10)); err != nil {
		return err
	}
	if _, err := conn.Write([]byte{' '}); err != nil {
		return err
	}
	if _, err := io.WriteString(conn, http.StatusText(statusCode)); err != nil {
		return err
	}
	if _, err := conn.Write(nlcf); err != nil {
		return err
	}
	for k, vals := range headers {
		for _, val := range vals {
			if _, err := io.WriteString(conn, k); err != nil {
				return err
			}
			if _, err := conn.Write([]byte{':', ' '}); err != nil {
				return err
			}
			if _, err := io.WriteString(conn, val); err != nil {
				return err
			}
			if _, err := conn.Write(nlcf); err != nil {
				return err
			}
		}
	}
	if _, err := conn.Write(nlcf); err != nil {
		return err
	}
	return nil
}

func main() {
	addr := "127.0.0.1:9000"
	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.Dir(".")))
	mux.HandleFunc("/echo", func(w http.ResponseWriter, r *http.Request) {
		defer r.Body.Close()
		b, err := io.ReadAll(r.Body)
		if err != nil {
			w.WriteHeader(400)
			return
		}
		w.Write(b)
	})
	mux.HandleFunc("/echo/chunked", func(w http.ResponseWriter, r *http.Request) {
		defer r.Body.Close()
		io.Copy(w, r.Body)
	})
	mux.HandleFunc("/status/{status}", func(w http.ResponseWriter, r *http.Request) {
		status, err := strconv.ParseInt(r.PathValue("status"), 10, 64)
		if err != nil {
			w.WriteHeader(http.StatusBadRequest)
			io.WriteString(w, fmt.Sprintf("error: %s", err))
			return
		}
		w.WriteHeader(int(status))
	})
	mux.HandleFunc("/headers", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Add("content-type", "application/json")
		json.NewEncoder(w).Encode(r.Header)
	})
	mux.HandleFunc("/nothing", func(w http.ResponseWriter, r *http.Request) {})
	s := Server{
		Addr:    addr,
		Handler: mux,
	}
	log.Printf("Starting web server: http://%s", addr)
	if err := s.ListenAndServe(); err != nil {
		log.Fatal(err)
	}
}