The standard library's JSON encoder is slow, and JSON itself is not an efficient format. BSON is where JSON should have been: at a minimum it eliminates the base64 encoding needed to transfer raw bytes.

The third-party JSON encoders either require code generation (at which point you might as well use proto, since it encodes better) or don't support the full feature set.

I was being lazy recently and decided to use JSON to encode an internal frame sent over a Unix socket. When I compared it to gRPC, I was getting destroyed in packets per second. Switching to a proto fixed this.
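
For context, the setup looked roughly like this. The Frame type and sendJSON helper below are hypothetical stand-ins for the real internal code, just to show the shape of it:

```go
package frame

import (
	"encoding/json"
	"net"
)

// Frame is a simplified stand-in for the internal frame type,
// not the real definition.
type Frame struct {
	Op      string `json:"op"`
	Payload []byte `json:"payload"` // encoding/json base64-encodes this field
}

// sendJSON encodes one frame per call onto the Unix socket.
func sendJSON(conn net.Conn, f *Frame) error {
	return json.NewEncoder(conn).Encode(f)
}
```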

I thought the culprit might be the base64 encoding of the []byte content. That does add some overhead, but frankly the built-in encoder is just slow.
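
To see the base64 step concretely: encoding/json renders any []byte field as a base64 string, which inflates the payload by roughly a third and costs an extra encode pass over the data:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	v := struct {
		Data []byte `json:"data"`
	}{Data: []byte("hello")}

	b, _ := json.Marshal(v)
	fmt.Println(string(b)) // {"data":"aGVsbG8="} — the bytes come out base64-encoded
}
```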

I think the benchmarks speak for themselves. I didn't even bother benchmarking the decodes; the encoding was enough to convince me.

  • *WithStr means the data is just a string
  • *WithBytes means the data is a []byte, which in JSON gets base64 encoded
BenchmarkJSONEncodingWithStr/_10kiBStr-16         	  119774	      9748 ns/op	   10914 B/op	       2 allocs/op
BenchmarkJSONEncodingWithStr/_100kiBStr-16        	   13214	     91671 ns/op	  107140 B/op	       2 allocs/op
BenchmarkJSONEncodingWithStr/_1miBStr-16          	    1292	    875316 ns/op	 1080500 B/op	       2 allocs/op
BenchmarkJSONEncodingWithBytes/_10kiBBytes-16     	  100412	     11924 ns/op	   15562 B/op	       3 allocs/op
BenchmarkJSONEncodingWithBytes/_100kiBBytes-16    	   10000	    110331 ns/op	  143101 B/op	       3 allocs/op
BenchmarkJSONEncodingWithBytes/_1miBBytes-16      	    1125	   1053090 ns/op	 1674998 B/op	       4 allocs/op
BenchmarkProtoEncodingWithBytes/_10kiBBytes-16    	  718411	      1440 ns/op	   10880 B/op	       1 allocs/op
BenchmarkProtoEncodingWithBytes/_100kiBBytes-16   	   95144	     12543 ns/op	  106496 B/op	       1 allocs/op
BenchmarkProtoEncodingWithBytes/_1miBBytes-16     	    8842	    126329 ns/op	 1032192 B/op	       1 allocs/op
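
For reference, here is a sketch of what the JSON side of these benchmarks would look like; it's a reconstruction rather than the original code, and the proto variant just swaps in proto.Marshal on a generated message:

```go
package bench

import (
	"encoding/json"
	"testing"
)

type msgWithBytes struct {
	Data []byte `json:"data"`
}

func BenchmarkJSONEncodingWithBytes(b *testing.B) {
	sizes := []struct {
		name string
		n    int
	}{
		{"_10kiBBytes", 10 << 10},
		{"_100kiBBytes", 100 << 10},
		{"_1miBBytes", 1 << 20},
	}
	for _, size := range sizes {
		msg := msgWithBytes{Data: make([]byte, size.n)}
		b.Run(size.name, func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				if _, err := json.Marshal(&msg); err != nil {
					b.Fatal(err)
				}
			}
		})
	}
}
```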