At Haaartland we want to deliver a product that is as eco-friendly as possible. To achieve this, we need to think about how we write our software in the best possible way and how we deploy it.

The EPA reports that approximately 29% of carbon emissions in the United States are tied to electricity production. That's more than transportation, which clocks in at around 27%. So while we are all dreaming of a technological future with bitcoin and futuristic cars, we need to pause and recognize that we aren't thinking about energy impact nearly as much as we should.

This post will focus on the software, which is the part that is under our control. The hardware part is equally important, but that comes down to selecting a sustainable service to run our software on. We run our product on Amazon Web Services (AWS), since they aim for a climate-neutral service; see their post on sustainability.

Efficient software is eco-friendly software. In the end, all software consumes resources such as CPU, memory, storage, and network bandwidth. Being eco-friendly means building our software to be as efficient as possible, consuming only the resources we absolutely need to deliver the product we want.

Most developers today are far removed from the fundamentals, and efficient coding is becoming rarer and rarer. To be eco-friendly, we need to start teaching and learning the fundamentals again.

What really happens when you go from working with a dataset of 10 elements to 10,000? A million? A billion?

Big-O is a good starting point to learn and become aware of the fundamentals.

Big-O notation is a tool we use for describing the complexity of an algorithm. Big-O expresses the best, worst, and average case of running an algorithm from a time (CPU cycles) and space (memory) perspective.
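To make those growth rates concrete, here is a small Go sketch (my own illustration, with illustrative formulas rather than measurements) that prints how operation counts scale for three common complexity classes:

```go
package main

import (
	"fmt"
	"math"
)

// growth returns illustrative operation counts for a dataset of size n
// under three common complexity classes: O(log n), O(n), and O(n²).
func growth(n float64) (logN, linear, quadratic float64) {
	return math.Log2(n), n, n * n
}

func main() {
	for _, n := range []float64{10, 10_000, 1_000_000} {
		l, lin, q := growth(n)
		fmt.Printf("n=%-9.0f O(log n)=%-4.0f O(n)=%-9.0f O(n²)=%.0f\n",
			n, l, lin, q)
	}
}
```

Notice how at a million elements the logarithmic column is still around 20, while the quadratic column has reached a trillion.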

So... let us dig into the basics of Big-O.

**O(1) — Constant time**

The code always takes the same amount of time, no matter how large the input dataset is. For instance, accessing a single element in an array of n items is an O(1) operation.

**Example:**

```
func logAccountBalance(acc *account) {
	tf := acc.Timestamp.Format(time.RFC3339)
	// Trailing \n so successive log lines don't run together.
	fmt.Printf("%s|%d|%s|%f\n", tf, acc.Id, acc.Curr, acc.Bal)
}
```

**O(n) — Linear time**

The run time increases at the same pace as the input dataset. The most common linear-time operations run through the entire dataset, like reading a file from start to finish.

**Example:**

```
func logAccountBalances(accs []*account) {
	for _, acc := range accs {
		logAccountBalance(acc)
	}
}
```

**O(n²) — Quadratic time**

The run time grows with the square of the size of the input dataset. Many of the basic sorting algorithms have a worst-case run time of O(n²), like Bubble Sort, Insertion Sort, and Selection Sort.

**Example:**

```
// With n customers each holding up to n accounts, the nested loops
// perform O(n²) calls in the worst case.
func logCustomerAccountBalances(custs []*customer) {
	for _, cust := range custs {
		for _, acc := range cust.Accounts {
			logAccountBalance(acc)
		}
	}
}
```

**O(log n) — Logarithmic time**

The run time grows in proportion to the logarithm of the input dataset size, meaning that the run time barely increases as you exponentially increase the input. Inserting or retrieving elements into or from a balanced binary tree is an example of this, as is performing a binary search on a sorted array.

**Example:**

```
func findAccountById(id int, sortedAccounts []*account) *account {
	// sort.Search performs a binary search and returns the smallest
	// index i in [0, n) at which f(i) is true.
	i := sort.Search(len(sortedAccounts), func(i int) bool {
		return sortedAccounts[i].Id >= id
	})
	if i < len(sortedAccounts) && sortedAccounts[i].Id == id {
		return sortedAccounts[i]
	}
	return nil
}
```

The algorithm you choose to solve a problem has a great impact on efficiency. Developers need to look beyond the libraries and get an understanding of the fundamentals. In my last example, I use the `sort.Search` function from the standard library, and I should know the cost of using it. Can you think of a more efficient way of getting the account with a given id? I can... perhaps use another data structure?
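One possible answer, sketched here under the assumption of a simplified `account` struct: build a map keyed by id. Building the index is a one-time O(n) cost, and every lookup after that is O(1) on average instead of O(log n) per binary search.

```go
package main

import "fmt"

// A simplified account for illustration; the real struct has more fields.
type account struct {
	Id  int
	Bal float64
}

// indexById builds a map from account id to account. Building it is
// O(n) once; each subsequent lookup is O(1) on average.
func indexById(accs []*account) map[int]*account {
	idx := make(map[int]*account, len(accs))
	for _, acc := range accs {
		idx[acc.Id] = acc
	}
	return idx
}

func main() {
	accs := []*account{{Id: 1, Bal: 10}, {Id: 42, Bal: 99.5}}
	idx := indexById(accs)
	if acc, ok := idx[42]; ok {
		fmt.Println(acc.Bal) // 99.5
	}
}
```

The trade-off is extra memory for the map, which is worth it when lookups dominate and the dataset fits in memory.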

*Note: I only scratched the surface in this post; I did not cover **O(2ⁿ)**, for instance. If you are interested in learning more, **check out this post**. In that article, the author goes into improving the processing of a given problem to solve it more efficiently. Perhaps you can even move a problem from one **O** class to another, faster class?*
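To get a feel for O(2ⁿ), the classic example is the naive recursive Fibonacci (my own illustration): each call spawns two more calls, so the call tree roughly doubles at every level.

```go
package main

import "fmt"

// fib computes the n-th Fibonacci number naively. Each call branches
// into two recursive calls, so the total work grows as O(2^n) —
// fib(50) would already take far too long, while a simple loop
// computes the same value in O(n).
func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

func main() {
	fmt.Println(fib(10)) // 55
}
```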

**O**h my god 🤠 That is all for today. I hope your head doesn't hurt! **Would you like to see more of this kind of post?**