I found it to be compelling (more on that in a moment) and I want to be impacted by them. I want the daily decisions that I make to be subtly influenced by this author and these books.
Related but in a different vein, Nat Eliason has his collection of book notes. Derek Sivers has his. Patrick Collison has a list of books he recommends. I’ve got my own list of recommended books, but I’ve wanted to dive a bit deeper on some of them. So, like Nat Eliason and others, I’m grab-bagging quotes (helpfully pulled from my Kindle highlights) and interspersing some thoughts between them.
I’ve not settled (yet) on a format I like, but as in most things, this is an iterative process. These notes may be useful to others (at least to help them decide if the book is worth reading) but primarily this is a helpful process to me.
I’ve included broad quotes from the book; headings, non-quoted text, bold/italicized emphasis, etc. are all my additions to help with skimming, unless I indicate otherwise.
The Case Against Sugar
The dominant views of obesity and weight have problems
Starting this out with a strong shot across the bow, Taubes argues that the “modern” understanding about why we get fat falls into two dominant approaches, and both are catastrophically wrong.
Since the 1930s, to summarize briefly, nutritionists have embraced two ideas that ultimately shaped our judgments about what constitutes a healthy diet. These would be the pillars on which the foundation of nutritional wisdom about the impact of foods — including sugar — on obesity, diabetes, heart disease, and other chronic diseases would be based. They were both products of the state of the science of the era; they were both misconceived, and they would both do enormous damage to our understanding of the diet-disease relationship and, as a result, the public health.
The first idea was that the fat in our diets causes the chronic diseases that tend to kill us prematurely in modern Western societies.
At its simplest, this focus on dietary fat — specifically from butter, eggs, dairy, and fatty meats — emerged from a concept that is now known as a nutrition transition: As populations become more affluent and more urban, more “Westernized” in their eating habits and lifestyle, they experience an increased prevalence of these chronic diseases. Almost invariably, the dietary changes include more fat consumed (and more meat) and fewer carbohydrates…
The second pillar of modern nutritional wisdom is far more fundamental and ultimately has had far more influence on how the science has developed, and it still dominates thinking on the sugar issue. As such, it has also done far more damage. To the sugar industry, it has been the gift that keeps on giving, the ultimate defense against all arguments and evidence that sugar is uniquely toxic. This is the idea that we get obese or overweight because we take in more calories than we expend or excrete.
By this thinking, researchers and public-health authorities think of obesity as a disorder of “energy balance,” a concept that has become so ingrained in conventional thinking, so widespread, that arguments to the contrary have typically been treated as quackery, if not a willful disavowal of the laws of physics.
According to this logic of energy balance, of calories-in/calories-out, the only meaningful way in which the foods we consume have an impact on our body weight and body fat is through their energy content — calories. This is the only variable that matters. We grow fatter because we eat too much — we consume more calories than we expend — and this simple truth was, and still is, considered all that’s necessary to explain obesity and its prevalence in populations.
This thinking renders effectively irrelevant the radically different impact that different macronutrients—the protein, fat, and carbohydrate content of foods—have on metabolism and on the hormones and enzymes that regulate what our bodies do with these foods: whether they’re burned for fuel, used to rebuild tissues and organs, or stored as fat.
By this energy-balance logic, the close association between obesity, diabetes, and heart disease implies no profound revelations to be gleaned about underlying hormonal or metabolic disturbances, but rather that obesity is driven, and diabetes and heart disease are exacerbated, by some combination of gluttony and sloth.
It implies that all these diseases can be prevented, or that our likelihood of contracting them is minimized if individuals - or populations - are willing to eat in moderation and perhaps exercise more, as lean individuals are assumed to do naturally. Despite copious reasons to question this logic and, as we’ll see, an entire European school of clinical research that came to consider it nonsensical, medical and nutrition authorities have tended to treat it as gospel.
This is super exciting, because I’m getting close to being able to glean good insights from DataDog’s Application Performance Monitoring tool.
For a variety of reasons, I want to run DataDog against the app as it is running locally, on my laptop. This will scale up to monitoring all this in production, but for now, I can rapidly experiment, and since we’re not deploying anything (yet) I can freely experiment with gathering/interpreting all this data locally.
The problem with running the app locally is that it’s usually running in development mode, which means Rails does lots of stuff to make local development easier, but which makes actual page loads take longer.
Nate covered how to configure your app to run it in a “production-like” environment, locally, but I got tripped up in some of the minor details involved with porting generalized instructions to our specific codebase.
So, today, I’m going to explain how to run the app locally in a way that mimics production. Some of this will be specific to our app, but some of it could be useful to anyone else with a Rails app.
Required changes to config/development.rb
Nate suggested setting these options in development.rb:
```ruby
# config/environments/development.rb
config.cache_classes = true
config.eager_load = true
config.serve_static_files = true # 4.2 or less
# config.public_file_server.enabled = true # 5.0 or more
config.assets.compile = false
config.assets.digest = true
config.active_record.migration_error = false
```
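With `config.assets.compile = false`, Rails won’t compile assets on the fly, so you need to precompile them before booting the server. A sketch of the workflow, assuming the standard Rails rake tasks:

```shell
# Precompile assets, since config.assets.compile = false means Rails
# will only serve what's already in public/assets
bundle exec rake assets:precompile

# Restart the server so the new environment config takes effect
bundle exec rails server
```

If you later switch back to normal development mode, `rake assets:clobber` clears out the precompiled assets so they don’t shadow your working copies.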
When I write guides to things, I write them first and foremost for myself, and I tend to work through things in excruciating detail. You might find this to be a little too in-depth, or you might appreciate the detail. Either way, if you want a step-by-step guide, this should do it.
Install and configure the latest Datadog Agent. (On macOS, install and run the Trace Agent in addition to the Datadog Agent. See the macOS Trace Agent documentation for more information.) APM is enabled by default in Agent 6; however, there are additional configurations to be set in a containerized environment, including setting apm_non_local_traffic: true. To get an overview of all the possible settings for APM, including setting up APM in containerized environments such as Docker or Kubernetes, see Sending traces to Datadog.
Today, we’ll figure out how to use siege to visit many unique URLs on our page, and to get benchmarks on that process. I’ll next figure out performance profiling in Datadog, and with these three tools put together, we should be ready to make some meaningful improvements to our application.
Siege is an http load testing and benchmarking utility. It was designed to let web developers measure their code under duress, to see how it will stand up to load on the internet. Siege supports basic authentication, cookies, HTTP, HTTPS and FTP protocols. It lets its user hit a server with a configurable number of simulated clients. Those clients place the server “under siege.”
You can get siege with brew install siege.
I’m using it because it can run a list of URLs you give it. Imagine your app is a store, and it lists a few thousand products. Each product should have a unique URL, something like www.mystore.com/product-name, or maybe www.mystore.com/productguid-product-name. That product-guid makes sure that you can have unique URLs, even if there are two items with the same product name.
Knowing what’s in your database, you can easily concat product-guid and product-name, stick it to the end of www.mystore.com, and come up with a list of a hundred or a thousand or ten thousand unique product URLs in your application. If you saved these to a text file and had Siege visit every single one of those pages as quickly as possible… this might look like some sort of good stress test, huh?
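A minimal sketch of that concatenation, with made-up GUIDs and product names (a real app would pull these from the database, and the store domain here is the hypothetical one from above):

```ruby
# Hypothetical products; note the two entries share a name but have
# distinct GUIDs, so their URLs stay unique
products = [
  { guid: "a1b2", name: "Red Widget" },
  { guid: "c3d4", name: "Red Widget" },
]

# Build a product URL by slugging the name and prefixing the GUID
def product_url(guid, name)
  slug = name.downcase.gsub(/[^a-z0-9]+/, "-")
  "http://www.mystore.com/#{guid}-#{slug}"
end

urls = products.map { |p| product_url(p[:guid], p[:name]) }

# One URL per line -- exactly the format siege's -f flag expects
File.write("product_urls.txt", urls.join("\n"))
```

The slugging here is deliberately crude; in a real Rails app you’d likely lean on something like `to_param` (as in the console session below) rather than hand-rolling it.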
Dumping unique URLs into a text file
You’ll probably start working in a rails console session, to figure out how to access the URL scheme just right.
I fired up the console, and entered:
```ruby
# Opens the file in write mode; overwrites the file's contents if it already exists
File.open("all_campaign_urls.txt", "w") do |file|
  Campaign.where(account_id: 4887).find_each do |campaign|
    puts "writing " + "http://localhost:3000/account/campaigns/" + campaign.to_param
    file.puts "http://localhost:3000/account/campaigns/" + campaign.to_param
  end
end
```
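With the URL list written out, a siege run against it might look like this (the concurrency and duration are arbitrary picks, not recommendations):

```shell
# Hit every URL in the file with 10 simulated clients for 30 seconds
# -b: benchmark mode (no delay between requests)
# -c: number of concurrent simulated clients
# -t: how long to run
# -f: file containing one URL per line
siege -b -c 10 -t 30s -f all_campaign_urls.txt
```

Siege prints a summary at the end: transaction count, availability, response times, and throughput, which is the baseline you’ll compare against after making performance changes.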
I’ve been slowly working through The Complete Guide to Rails Performance. I’m taking the ideas and concepts from Nate’s book and working on applying the lessons to the app I work on in my day job.
I had a chance to attend Nate’s workshop in Denver a few days ago, as well; while there, we fired up our apps in production-like mode and used wrk, an HTTP benchmarking tool, to see how many pages our app could serve in a given amount of time. (wrk docs).
You can use it very similarly to wrk - give it a thread count, connection count, duration, and address, and it’ll hammer that page and serve up all sorts of good results.
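For reference, a typical wrk invocation looks like this (the thread and connection counts here are arbitrary; tune them to your machine):

```shell
# -t: worker threads, -c: open connections, -d: test duration
# Hammers the root page and reports latency and requests/sec
wrk -t4 -c16 -d30s http://localhost:3000/
```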
The “rule of thumb” for benchmarking “protected” pages is:
Whatever page you can access locally in an incognito browser is what your benchmarking tool can hit without any special authentication.
In other words, when I visit http://localhost:3000/ locally, I get redirected to http://localhost:3000/users/sign_in. This is fine for apache bench, if I want to test how quickly our sign-in page loads. I can run:
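An Apache Bench command along these lines will exercise the sign-in page (the request and concurrency counts are arbitrary examples):

```shell
# -n: total number of requests, -c: how many to run concurrently
ab -n 100 -c 10 http://localhost:3000/users/sign_in
```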