Monday, July 31, 2023

Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory

In this article, you'll learn the high-level steps to configure your Azure API Management instance to protect an API by using the OAuth 2.0 protocol with Azure Active Directory (Azure AD).

For a conceptual overview of API authorization, see Authentication and authorization to APIs in API Management.

Prerequisites

Prior to following the steps in this article, you must have:

  • An API Management instance
  • A published API using the API Management instance
  • An Azure AD tenant

Overview

Follow these steps to protect an API in API Management, using OAuth 2.0 authorization with Azure AD.

  1. Register an application (called backend-app in this article) in Azure AD to protect access to the API.

    To access the API, users or applications will acquire and present a valid OAuth token granting access to this app with each API request.

  2. Configure the validate-jwt policy in API Management to validate the OAuth token presented in each incoming API request. Valid requests can be passed to the API.

Details about OAuth authorization flows and how to generate the required OAuth tokens are beyond the scope of this article. Typically, a separate client app is used to acquire tokens from Azure AD that authorize access to the API. For links to more information, see the Next steps.
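
As an illustration only (not part of the official steps), here is a minimal node.js sketch of a separate client app acquiring a token with the OAuth 2.0 client credentials flow via the @azure/msal-node library and then presenting it to the API through API Management. The tenant, client IDs, secret, scope, and API URL are placeholders you would replace with your own values.

// Hypothetical sketch: acquire an Azure AD token (client credentials flow)
// and call the API through API Management. All IDs and URLs are placeholders.
const msal = require('@azure/msal-node');
const axios = require('axios');

const cca = new msal.ConfidentialClientApplication({
  auth: {
    clientId: '<client-app-id>', // a separate client app registration, not the backend-app
    authority: 'https://login.microsoftonline.com/<aad-tenant>',
    clientSecret: process.env.CLIENT_SECRET,
  },
});

async function callApi() {
  // Request a token whose audience is the backend-app registered for the API.
  const result = await cca.acquireTokenByClientCredential({
    scopes: ['api://<backend-app-client-id>/.default'],
  });

  // Present the token in the Authorization header of each API request.
  const response = await axios.get('https://<apim-instance>.azure-api.net/<api-path>', {
    headers: { Authorization: `Bearer ${result.accessToken}` },
  });
  console.log(response.status, response.data);
}

callApi().catch(console.error);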

Register an application in Azure AD to represent the API

Using the Azure portal, protect an API with Azure AD by first registering an application that represents the API.

For details about app registration, see Quickstart: Configure an application to expose a web API.

  1. In the Azure portal, search for and select App registrations.

  2. Select New registration.

  3. When the Register an application page appears, enter your application's registration information:

    • In the Name section, enter a meaningful application name that will be displayed to users of the app, such as backend-app.
    • In the Supported account types section, select an option that suits your scenario.
  4. Leave the Redirect URI section empty.

  5. Select Register to create the application.

  6. On the app Overview page, find the Application (client) ID value and record it for later.

  7. Under the Manage section of the side menu, select Expose an API and set the Application ID URI with the default value. If you're developing a separate client app to obtain OAuth 2.0 tokens for access to the backend-app, record this value for later.

  8. Select the Add a scope button to display the Add a scope page:

    1. Enter a new Scope name, Admin consent display name, and Admin consent description.
    2. Make sure the Enabled scope state is selected.
  9. Select the Add scope button to create the scope.

  10. Repeat the previous two steps to add all scopes supported by your API.

  11. Once the scopes are created, make a note of them for use later.

Configure a JWT validation policy to pre-authorize requests

The following example policy, when added to the <inbound> policy section, checks the value of the audience claim in an access token obtained from Azure AD that is presented in the Authorization header. It returns an error message if the token is not valid. Configure this policy at a policy scope that's appropriate for your scenario.

  • In the openid-config URL, the aad-tenant is the tenant ID in Azure AD. Find this value in the Azure portal, for example, on the Overview page of your Azure AD resource. The example shown assumes a single-tenant Azure AD app and a v2 configuration endpoint.
  • The value of the required claim (aud) is the client ID of the backend-app you registered in Azure AD.
XML
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{aad-tenant}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>{audience-value - (ex:api://guid)}</audience>
    </audiences>
    <issuers>
        <issuer>{issuer-value - (ex: https://sts.windows.net/{tenant id}/)}</issuer>
    </issuers>
    <required-claims>
        <claim name="aud">
            <value>{backend-app-client-id}</value>
        </claim>
    </required-claims>
</validate-jwt>

 Note

The preceding openid-config URL corresponds to the v2 endpoint. For the v1 openid-config endpoint, use https://login.microsoftonline.com/{aad-tenant}/.well-known/openid-configuration.

For information on how to configure policies, see Set or edit policies. Refer to the validate-jwt reference for more customization on JWT validations. To validate a JWT that was provided by the Azure Active Directory service, API Management also provides the validate-azure-ad-token policy.
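
To get a feel for what the validate-jwt policy inspects, you can decode a token locally. The following is a small, hypothetical node.js sketch using the jsonwebtoken package; it only decodes the token for inspection, while signature validation is performed by API Management against the OpenID configuration endpoint.

// Hypothetical sketch: decode (without verifying) an Azure AD access token
// to inspect the claims that the validate-jwt policy checks.
const jwt = require('jsonwebtoken');

const token = process.env.ACCESS_TOKEN;   // an access token obtained from Azure AD
const claims = jwt.decode(token);         // decode only; no signature verification here

console.log('aud:', claims.aud); // should match the backend-app client ID (or api://<app-id>)
console.log('iss:', claims.iss); // should match the issuer configured in the policy
console.log('exp:', new Date(claims.exp * 1000).toISOString()); // expiry is also enforced by the policy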

Authorization workflow

  1. A user or application acquires a token from Azure AD with permissions that grant access to the backend-app.

  2. The token is added in the Authorization header of API requests to API Management.

  3. API Management validates the token by using the validate-jwt policy.

    • If a request doesn't have a valid token, API Management blocks it.

    • If a request is accompanied by a valid token, the gateway can forward the request to the API.

Securing APIs: 10 Best Practices for Keeping Your Data and Infrastructure Safe

Interesting points:

An overview and quick summary of the different security protection methodologies used. It doesn't give complete implementation details, but it offers a simple way to understand API security.

Introduction

In part one of this two-part series, we explained what web APIs are and how they work. In this article, we look at how APIs can pose risks to your data and infrastructure—and what you can do to secure them.

In part one, we learned that web APIs (application programming interfaces) provide a way for app developers to “call” information from outside sources into the applications they build. The example we gave was a travel app, which uses web API calls to pull in availability and pricing information from various hotel, airline, cruise line, tour, car rental, and other companies. APIs benefit app developers by simplifying the coding process and granting them access to a wealth of data and resources they would not otherwise be able to access. APIs also benefit providers, who are able to create new revenue streams by making valuable data and services available to developers, usually for a fee. And ultimately, APIs benefit consumers, who appreciate (and drive demand for) innovative, feature-rich, interactive apps that provide many services all in one app.

Understanding the Potential Risks of APIs

The downside of publicly available web APIs is that they can potentially pose great risk to API providers. By design, APIs give outsiders access to your data: behind every API, there is an endpoint—the server (and its supporting databases) that responds to API requests (see Figure 1). In terms of potential vulnerability (an inherent weakness in a system, hardware or software, that an attacker can potentially exploit; vulnerabilities exist in every system, and “zero-day” vulnerabilities are those that have not yet been discovered), an API endpoint is similar to any Internet-facing web server; the more free and open access the public has to a resource, the greater the potential threat from malicious actors. The difference is that many websites at least employ some type of access control, requiring authorized users to log in. One problem with some APIs, as we’ll see shortly, is that they provide weak access control and, in some cases, none at all. With APIs becoming foundational to modern app development, the attack surface (all entry points through which an attacker could potentially gain unauthorized access to a network or system to extract or enter data or to carry out other malicious activities) is continually increasing. Gartner estimates that “by 2022, API abuses will move from infrequent to the most frequent attack vector, resulting in data breaches for enterprise web applications.”1 (An attack vector is the path and means by which an attacker can gain unauthorized access to a network, system, program, application, or device for malicious purposes.)

Figure 1: Web APIs connect to an endpoint: the location of the web server and supporting databases

In the worst case, it’s not just your data that is potentially at risk but also your infrastructure. By exploiting a vulnerable API, attackers can gain access to your network using one kind of attack. If they’re able to escalate privileges, they can then pivot to other types of attacks and gain a foothold in the network. The right attack—often a multi-level attack—could potentially lead to your organization’s most sensitive data being compromised, whether it’s personally identifiable information (PII) or intellectual property (IP).

No matter what the attack vector, a data breach is a data breach: it can damage your company’s brand and reputation and could result in significant fines and lost revenue. No organization is immune; some of the largest and well-known companies—Facebook,2, 3 Google,4 Equifax,5 Instagram,6, 7 T-Mobile,8 Panera Bread,9 Uber,10 Verizon,11 and others—have suffered significant data breaches as a result of API attacks. It’s imperative for all companies, not just large ones, to secure all APIs, particularly those that are publicly available.

Common Attacks Against Web APIs

APIs are susceptible to many of the same kinds of attacks defenders have been fighting in their networks and web-based apps for years. None of the following attacks is new, but all of them can easily be used against APIs.

  • Injection occurs when an attacker is able to insert malicious code or commands into a program, usually where ordinary user input (such as a username or password) is expected. SQL injection is a specific type of injection attack, enabling an attacker to gain control of an SQL database.
  • Cross-site scripting (XSS) is a type of injection attack that occurs when a vulnerability enables an attacker to insert a malicious script (often JavaScript) into the code of a web app or webpage.
  • Distributed denial-of-service (DDoS) attacks make a network, system, or website unavailable to intended users, typically by flooding it with more traffic than it can handle. API endpoints are among the growing list of DDoS targets.
  • Man-in-the-middle (MitM) attacks occur when an attacker intercepts traffic between two communicating systems and impersonates each to the other, acting as an invisible proxy between the two. With APIs, MitM attacks can occur between the client (app) and the API, or between the API and its endpoint.
  • Credential stuffing is the use of stolen credentials on API authentication endpoints to gain unauthorized access.

Briefly, Table 1 matches attack types to traditional mitigations:
 

API Attack Types and Mitigations

Attack Type | Mitigations
Injection | Validate and sanitize all data in API requests; limit response data to avoid unintentionally leaking sensitive data
Cross-Site Scripting (XSS) | Validate input; use character escaping and filtering
Distributed Denial-of-Service (DDoS) | Use rate limiting and limit payload size
Man-in-the-Middle (MitM) | Encrypt traffic in transit
Credential Stuffing | Use an intelligence feed to identify credential stuffing and implement rate limits to control brute force attacks

Table 1. Common attack types that can be used against APIs matched to corresponding mitigations
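
To make the injection mitigation concrete, here is a small, hypothetical node.js sketch (the table name, fields, and the pg PostgreSQL client are assumptions) that validates input, uses a parameterized query instead of concatenating user input into SQL, and returns only the fields the caller needs.

// Hypothetical sketch: mitigate SQL injection by validating input and using
// a parameterized query; also limit the data returned in the response.
const { Pool } = require('pg');
const pool = new Pool(); // connection settings are read from environment variables

async function getUser(usernameFromRequest) {
  // Validate input: reject anything outside the expected character set.
  if (!/^[a-zA-Z0-9_-]{1,64}$/.test(usernameFromRequest)) {
    throw new Error('Invalid username');
  }

  // Parameterized query: user input is never concatenated into the SQL string.
  const result = await pool.query(
    'SELECT id, username FROM users WHERE username = $1',
    [usernameFromRequest]
  );

  // Limit response data: return only the fields the caller needs.
  return result.rows.map(({ id, username }) => ({ id, username }));
}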

Best Practices for Securing APIs

In addition to employing the mitigations outlined in Table 1, it’s critical that organizations adhere to some basic security best practices and employ well-established security controls if they intend to share their APIs publicly.

  • Prioritize security. API security shouldn’t be an afterthought or considered “someone else’s problem.” Organizations have a lot to lose with unsecured APIs, so make security a priority and build it into your APIs as they’re being developed.
  • Inventory and manage your APIs. Whether an organization has a dozen or hundreds of publicly available APIs, it must first be aware of them in order to secure and manage them. Surprisingly, many are not. Conduct perimeter scans to discover and inventory your APIs, and then work with DevOps teams to manage them.
  • Use a strong authentication and authorization solution. Poor or non-existent authentication and authorization are major issues with many publicly available APIs. Broken authentication occurs when APIs do not enforce authentication (as is often the case with private APIs, which are meant for internal use only) or when an authentication factor (something the client knows, has, or is) can be broken into easily. Since APIs provide an entry point to an organization’s databases, it’s critical that the organization strictly controls access to them. When feasible, use solutions based on solid, proven authentication and authorization mechanisms such as OAuth 2.0 and OpenID Connect.
  • Practice the principle of least privilege. This foundational security principle holds that subjects (users, processes, programs, systems, devices) be granted only the minimum necessary access to complete a stated function. It should be applied equally to APIs.
  • Encrypt traffic using TLS. Some organizations may choose not to encrypt API payload data that is considered non-sensitive (for example, weather service data), but for organizations whose APIs routinely exchange sensitive data (such as login credentials, credit card, social security, banking information, health information), TLS encryption should be considered essential.
  • Remove information that’s not meant to be shared. Because APIs are essentially a developer’s tool, they often contain keys, passwords, and other information that should be removed before they’re made publicly available. But sometimes this step is overlooked. Organizations should incorporate scanning tools into their DevSecOps processes to limit accidental exposure of secret information.
  • Don’t expose more data than necessary. Some APIs reveal far too much information, whether it’s the volume of extraneous data that’s returned through the API or information that reveals too much about the API endpoint. This typically occurs when an API leaves the task of filtering data to the user interface instead of the endpoint. Ensure that APIs only return as much information as is necessary to fulfill their function. In addition, enforce data access controls at the API level, monitor data, and obfuscate if the response contains confidential data.
  • Validate input. Never pass input from an API through to the endpoint without validating it first.
  • Use rate limiting. Setting a threshold above which subsequent requests will be rejected (for example, 10,000 requests per day per account) can prevent denial-of-service attacks (a brief sketch combining rate limiting with input validation follows this list).
  • Use a web application firewall. Ensure that it is able to understand API payloads.
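
As a simple illustration of the input validation and rate limiting practices above, here is a minimal, hypothetical Express sketch using the express-rate-limit middleware; the limits, route, and field checks are placeholders, not recommendations for any specific API.

// Hypothetical sketch: apply a rate limit and validate input on an API route.
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();
app.use(express.json());

// Reject clients that exceed 100 requests per 15 minutes from the same source.
const limiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 100 });
app.use('/api/', limiter);

app.get('/api/orders/:id', (req, res) => {
  // Validate input before passing it through to the endpoint or database.
  if (!/^\d+$/.test(req.params.id)) {
    return res.status(400).json({ error: 'Invalid order id' });
  }
  // ... look up the order, then return only the fields the client needs ...
  res.json({ id: req.params.id });
});

app.listen(3000);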

Conclusion

APIs have arguably become the preferred method for building modern applications, especially for mobile and Internet of Things (IoT) devices. And while the concept of pulling information into a program from an outside source is not a new one, constantly evolving app development methods and the pressure to innovate mean some organizations may not yet have grasped the potential risks involved in making their APIs publicly available. The good news is that there’s no great mystery involved in securing them. Most organizations already have measures in place to combat well-known attacks like cross-site scripting, injection, distributed denial-of-service, and others that can target APIs. And many of the best practices mentioned above are likely quite familiar to seasoned security professionals. If you’re not sure where to begin, start at the top of the list and work your way down. No matter how many APIs your organization chooses to share publicly, your ultimate goal should be to establish solid API security policies and manage them proactively over time.

Backends for Frontends Pattern AWS

 Ref: https://aws.amazon.com/blogs/mobile/backends-for-frontends-pattern/


Interesting points and a different approach to designing a BFF:


AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs. This article explains how to bring API functionality to the client through a BFF. The architecture is completely event-driven, which makes it especially interesting from an integration point of view.


In this blog post, we describe how you can improve end-user customer experience on your User Interfaces (UI) by implementing the Backend for Frontend pattern and providing real-time visual updates when your microservices raise events about mutations in their domain aggregates.

The solution proposed combines two patterns: 1) the Backends for Frontends (BFF) pattern, where applications have one backend per user experience, instead of having only one general-purpose API backend; and 2) the Publisher–Subscriber (pub/sub) pattern, where microservices announce events to multiple interested consumers asynchronously, without coupling the senders to the receivers. When combined, these two patterns allow frontend clients to load UI-ready data projections and to refresh the UI with event-driven notifications, resulting in a highly performant, near-real-time experience for end-users.

To explain the solution to both REST and GraphQL API developers, we provide two similar architecture diagrams addressing each API technology.

The BFF pattern

According to Sam Newman, the Backend For Frontend (BFF) pattern refers to having one backend per user experience, instead of having only one general-purpose API backend.

Traditionally, the approach to accommodating more than one type of UI is to provide a single, server-side API, and add more functionality as required over time to support new types of mobile interaction. The following challenges may result from this approach:

  1. Mobile devices make fewer calls, and want to display different (and probably less) data than their desktop counterparts – this means that API backends need additional functionality to support mobile interfaces.
  2. Modern application UIs are increasingly adopting reactive strategies to provide real-time feedback to end-users (for example via WebSockets), and different devices may implement different technology stacks to support it.
  3. API backends are, by definition, providing functionality to multiple, user-facing applications – this means that the single API backend can become a bottleneck when rolling out new delivery, due to the many changes being made to the same deployable artifact.

To address these challenges, Sam suggests that you should think of the user-facing application as being two components: a client-side application living outside your perimeter, and a server-side component (the BFF) inside your perimeter. According to him, the BFF is tightly coupled to a specific user experience, and will typically be maintained by the same team as the user interface, thereby making it easier to define and adapt the API as the UI requires, while also simplifying the process of lining up releases of both the client and server components.

Following Sam’s suggestion, the BFF pattern has been adopted by companies like Netflix, where their Android team seamlessly swapped the API backend of the Netflix Android app, enabling them to work with their endpoint with increased level of scrutiny, observability, and integration with Netflix’s microservice ecosystem.

Components of an event-driven BFF on AWS

Assuming that you already have an event-driven architecture in place, regardless of having events generated by multiple microservices or by a single monolith, then you already have enough to start building your decoupled event-driven Backends-for-Frontends (BFFs) for each of your end-user experiences.

The diagram below presents a high-level view of the architecture and its message flow. The domain publishers are on the right, each with its own domain-specific aggregate database; the BFF subscribers are on the left, each with its own user-experience-specific projection database. In the middle, there’s an event bus propagating domain state changes, allowing publishers and subscribers to remain decoupled.

Figure 1. Message flow diagram.

The event-driven BFF solutions described in this blog post rely on a technology-specific API, in addition to the following common components:

  • A NoSQL database to store domain projection tables, which also supports Change Data Capture (CDC). Here, we use Amazon DynamoDB – a fast, flexible NoSQL database service for single-digit millisecond performance at any scale.
  • A compute layer to process requests and to integrate the CDC stream with the API. Here, we use AWS Lambda – a serverless, event-driven compute service that lets you run code without thinking about servers or clusters.
  • An authentication and authorization mechanism to protect the API. Here, we use Amazon Cognito – a simple and secure service for user sign-up, sign-in, and access control.

Building an event-driven REST BFF using API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications.

The following architecture diagram describes, using Domain-Driven Design (DDD) concepts, how to leverage API Gateway WebSocket APIs to create an event-driven UI for end-users. For a PDF version of this diagram, see this link.

Figure 2. Diagram of RESTful BFF using API Gateway.

Implementation steps:

  1. Catch the events from your application with purpose-built BFF event consumers. These are responsible for updating denormalized data projections in Amazon DynamoDB for frontend consumption.
  2. On UI load, frontend clients authenticate with Amazon Cognito, then query the data by invoking the BFF API built with Amazon API Gateway. The data is then fetched in DynamoDB, either directly by API Gateway or via a BFF query handler built with AWS Lambda.
  3. Frontend clients subscribe for any subsequent data changes by connecting to a BFF WebSocket endpoint provided by API Gateway, which triggers the update of the “connected clients” table.
  4. Continue to consume and process all relevant events from your application using the BFF event consumers. These consumers continuously update the denormalized frontend data view in the BFF database in real time.
  5. Subscribe to all events resulting from data changes in the BFF database using Amazon DynamoDB Streams, then register a trigger in AWS Lambda to asynchronously invoke a BFF stream-handler Lambda function when it detects new stream records.
  6. Your BFF stream handler then pushes notifications to clients connected to API Gateway’s WebSockets (see the sketch after these steps).
  7. When the change notification from API Gateway is received by the frontend clients, they can refresh the UI content.
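
As an illustration of step 6 (a sketch, not the article's reference implementation), the Lambda stream handler below reads connection IDs from a hypothetical "connected clients" DynamoDB table and pushes a notification to each client through API Gateway's WebSocket management API. The table layout, environment variables, and payload shape are assumptions.

// Hypothetical sketch for step 6: push a change notification to every
// WebSocket client registered in the "connected clients" table.
const { DynamoDBClient, ScanCommand } = require('@aws-sdk/client-dynamodb');
const {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} = require('@aws-sdk/client-apigatewaymanagementapi');

const ddb = new DynamoDBClient({});
const mgmt = new ApiGatewayManagementApiClient({
  endpoint: process.env.WEBSOCKET_API_ENDPOINT, // e.g. https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
});

exports.handler = async (event) => {
  // Read the currently connected clients (a full scan keeps the sketch simple).
  const connections = await ddb.send(
    new ScanCommand({
      TableName: process.env.CONNECTIONS_TABLE,
      ProjectionExpression: 'connectionId',
    })
  );

  const notification = Buffer.from(JSON.stringify({ type: 'DATA_CHANGED' }));

  // Notify every connection; handling of stale connections is omitted for brevity.
  await Promise.all(
    (connections.Items || []).map((item) =>
      mgmt.send(new PostToConnectionCommand({ ConnectionId: item.connectionId.S, Data: notification }))
    )
  );

  return { message: `Notified ${connections.Count} clients` };
};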

Building an event-driven GraphQL BFF using AppSync

We see organizations increasingly choosing to build APIs with GraphQL because it helps them develop applications faster, by giving frontend developers the ability to query multiple databases, microservices, and APIs with a single GraphQL endpoint.

AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like Amazon DynamoDB, AWS Lambda, and more. Once deployed, AWS AppSync automatically scales your GraphQL API execution engine up and down to meet API request volumes. Additionally, AppSync adds caches to improve performance, supports client-side data stores that keep off-line clients in sync, and supports real-time updates via transparent subscriptions over WebSockets.

The following architecture diagram describes, using Domain-Driven Design (DDD) concepts, how to leverage AppSync subscriptions to create an event-driven UI for end-users. For a PDF version of this diagram, see this link.

Figure 3. Diagram of GraphQL BFF using AppSync.

Implementation steps:

  1. Catch the events from your application with purpose-built BFF event consumers. These are responsible for keeping a denormalized view of data in Amazon DynamoDB for frontend consumption.
  2. On UI load, frontend clients authenticate with Amazon Cognito, then query the data with GraphQL by invoking the BFF API built with AWS AppSync. The data is then fetched in DynamoDB, either directly by AWS AppSync or via a BFF query handler built with AWS Lambda.
  3. Frontend clients subscribe for any subsequent data changes using AWS AppSync subscriptions over WebSockets.
  4. Continue to consume and process all relevant events from your application using the BFF event consumers. These consumers continuously update the denormalized frontend data view in the BFF database in real-time.
  5. Subscribe to all events resulting from data changes in the BFF database using Amazon DynamoDB Streams, then register a trigger in AWS Lambda to asynchronously invoke a BFF stream-handler Lambda function when it detects new stream records.
  6. Your BFF stream handler then invokes an empty mutation on the AWS AppSync GraphQL schema, purposely created to force the subscription to be triggered, thus sending a notification to clients.
  7. When the change notification from AWS AppSync is received by the frontend clients, they can refresh the UI content.
AWS AppSync provides a simplified WebSockets experience with real-time data, connections, scalability, fan-out, and broadcasting, which are all handled by the AWS AppSync service, allowing developers to focus on the application use cases and requirements instead of dealing with the complexity of infrastructure required to manage WebSockets connections at scale.
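
For the client side of steps 3 and 7, here is a small, hypothetical sketch of a frontend subscribing to change notifications, assuming the aws-amplify v5 client library and a subscription field (onPublishItemsModel) bound in the schema to the publishItemsModel mutation via @aws_subscribe. None of these names come from the original post.

// Hypothetical sketch: a frontend client subscribing to AppSync notifications
// (aws-amplify v5 API; the subscription name and fields are assumptions).
import { API, graphqlOperation } from 'aws-amplify';

const onPublishItemsModel = /* GraphQL */ `
  subscription OnPublishItemsModel {
    onPublishItemsModel {
      id
      item
    }
  }
`;

const subscription = API.graphql(graphqlOperation(onPublishItemsModel)).subscribe({
  next: ({ value }) => {
    // A change notification arrived: re-query the BFF projection and refresh the UI.
    console.log('Change received:', value.data.onPublishItemsModel);
  },
  error: (error) => console.error('Subscription error:', error),
});

// Later, when the view is torn down:
// subscription.unsubscribe();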

The AWS Lambda stream handler function in step 6 is triggered by new items being inserted into the Amazon DynamoDB table. It reads these items and updates AWS AppSync, alerting it to the new data. A sample of this code, written in node.js, is available here on GitHub as part of an AWS Sample for deploying AWS AppSync in a multi-region setup. The primary functions were extracted and are explained below.

The code below shows the entry point to the Lambda function. It receives an event from DynamoDB, triggered by new data in the stream. For each new entry, if it is data that has been inserted (rather than updated or deleted), it will parse the data and call the executeMutation function.

exports.handler = async(event) => {

  for (let record of event.Records) {
    switch (record.eventName) {
    
      case 'INSERT':
        // Grab the data we need from stream...
        let id = record.dynamodb.Keys.id.S;
        let name = record.dynamodb.NewImage.item.S;
        // ... and then execute the publish mutation
        await executeMutation(id, name);
        break;
        
      default:
        break;
    }
  }
  
  return { message: `Finished processing ${event.Records.length} records` }
}

The code below is the executeMutation function that mutates AWS AppSync with the new data received from the DynamoDB stream. Using the third-party library Axios (a promise-based HTTP client for node.js) it connects to the AppSync API endpoint and posts the mutation.

// Modules required by this extract (omitted in the original excerpt): axios for
// the HTTP call, graphql-tag to parse the mutation, and graphql's print to serialize it.
const axios = require('axios');
const gql = require('graphql-tag');
const { print } = require('graphql');

const executeMutation = async(id, name) => {

  const mutation = {
    query: print(publishItem),
    variables: {
      name: name,
      id: id,
    },
  };

  try {
  
    let response = await axios({
      url: process.env.AppSyncAPIEndpoint,
      method: 'post',
      headers: {
        'x-api-key': process.env.AppSyncAPIKey
      },
      data: JSON.stringify(mutation)
    });
    
  } catch (error) {
    throw error;
  }
};

The mutation is generated using the following GQL code:

const publishItem = gql`
  mutation PublishMutation($name: String!, $id: ID!) {
    publishItemsModel(input: {id: $id, item: $name}) {
      id
      item
    }
  }
`

Conclusion

The BFF pattern refers to having one backend per user experience, instead of having only one general-purpose API backend.

By implementing the BFF pattern in an event-driven architecture, you can improve end-user customer experience on your UI by providing near-real-time visual updates when your microservices raise events about mutations in domain aggregates.

In this blog post, we explained how developers can apply the BFF pattern to their REST and GraphQL APIs to load UI-ready data projections and refresh the UI with event-driven notifications.

You can download the detailed reference architectures in PDF for Amazon API Gateway here and AWS AppSync here.

React Profiler: A Step by step guide to measuring app performance

  As React applications are becoming more and more complex, measuring application performance becomes a necessary task for developers. React...