A key difference between "[[Serverless MOC|Serverless]]" cloud resources and non-serverless ones is that the former are provisioned on demand and consume resources (CPU, network, memory, storage, etc.) only when in active use. In addition to the cost savings this provides to customers, it also has an important environmental impact, as it allows cloud providers such as [[AWS]] to provision their services significantly more efficiently.

## Example use case

The simplest example within [[AWS]] is that of a company building a REST API. With a traditional server-based architecture, this would require provisioning an [[AWS EC2|EC2]] instance that is always on (24/7/365), even when the API receives no traffic. If the API needs to handle occasional high bursts of traffic, the instance would need to be large enough to absorb those peaks. And to achieve high availability, at least two of these instances would need to be running at the same time.

Contrast this with a functionally equivalent API built on a serverless architecture. Using [[AWS API Gateway|API Gateway]] and [[AWS Lambda|Lambda]] functions, cloud resources are consumed only when requests are made to the API; while users are inactive, no cloud resources are consumed. A minimal sketch of this serverless variant is included at the end of this note.

## Greenhouse Gas Protocol and AWS customers

> The [Greenhouse Gas Protocol](https://ghgprotocol.org/) organizes carbon emissions into the following scopes, along with relevant emission examples within each scope for a cloud provider such as AWS:
>
> - **Scope 1:** All direct emissions from the activities of an organization or under its control. For example, fuel combustion by data center backup generators.
> - **Scope 2:** Indirect emissions from electricity purchased and used to power data centers and other facilities. For example, emissions from commercial power generation.
> - **Scope 3:** All other indirect emissions from activities of an organization from sources it doesn’t control. AWS examples include emissions related to data center construction, and the manufacture and transportation of IT hardware deployed in data centers.
>
> From an AWS customer perspective, **emissions from your workloads running on AWS are accounted for as indirect emissions, and part of your Scope 3 emissions**. Each workload deployed generates a fraction of the total AWS emissions from each of the previous scopes. The actual amount varies per workload and depends on several factors including the AWS services used, the energy consumed by those services, the carbon intensity of the electric grids serving the AWS data centers where they run, and the AWS procurement of renewable energy.
>
> Customers are responsible for sustainability _in_ the cloud – optimizing workloads and resource utilization, and minimizing the total resources required to be deployed for your workloads.

This last point is where [[Serverless MOC|Serverless]] shines.

## #OpenQuestions

Areas to research further:

- Do cloud providers like [[AWS]] provide any dashboard metrics (e.g. via [[AWS CloudWatch|CloudWatch]]) that directly map cloud account resource usage to environmental impact metrics?
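## Example: minimal serverless API definition

To make the example use case above concrete, here is a minimal sketch of the serverless variant (API Gateway in front of a single Lambda function), assuming the AWS CDK v2 (`aws-cdk-lib`) in TypeScript. The stack, construct, and resource names (`ServerlessApiStack`, `ApiHandler`, `Api`) are illustrative only, not taken from any real project, and the inline handler is a placeholder rather than a production-ready implementation.

```typescript
// Minimal AWS CDK v2 app: an API Gateway REST API backed by a single Lambda function.
// No always-on EC2 instances are provisioned; compute runs (and is billed) only per request.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

class ServerlessApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Lambda function that handles every API request.
    // Inline code keeps the sketch self-contained; a real project would bundle an asset.
    const handler = new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline(
        'exports.handler = async () => ({ statusCode: 200, body: JSON.stringify({ message: "hello" }) });'
      ),
    });

    // REST API that proxies all routes to the Lambda handler.
    new apigateway.LambdaRestApi(this, 'Api', { handler });
  }
}

const app = new App();
new ServerlessApiStack(app, 'ServerlessApiStack');
```

Deployed with `cdk deploy`, this stack scales with incoming requests and consumes no compute while the API is idle, which is the property that makes the serverless variant both cheaper and more efficient than the always-on EC2 alternative described above.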
---

## References

- [AWS Sustainability Pillar](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sustainability-pillar.html) - one of the six pillars that make up the AWS [[Well-Architected Framework]]:
  > "The sustainability pillar includes the ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload by maximizing the benefits from the provisioned resources and minimizing the total resources required."
- [How Google Cloud and AWS Approach Customer Carbon Emissions](https://www.lastweekinaws.com/blog/How-Google-Cloud-and-AWS-Approach-Customer-Carbon-Emissions/?ck_subscriber_id=512833200) by [[@Corey Quinn]]