AWS recognized as an Overall Leader in 2024 KuppingerCole Leadership Compass for Policy Based Access Management

Figure 1: KuppingerCole Leadership Compass for Policy Based Access Management

The report helps organizations learn about policy-based access management solutions for common use cases and requirements. KuppingerCole defines policy-based access management as an approach that helps to centralize policy management, run authorization decisions across a variety of applications and resource types, continually evaluate authorization decisions, and support corporate governance.

Policy-based access management has three major benefits: consistency, security, and agility. Many organizations grapple with a patchwork of access control mechanisms, which can hinder their ability to implement a consistent approach across the organization, increase their security risk exposure, and reduce the agility of their development teams. A policy-based access control architecture helps organizations centralize their policies in a policy store outside the application codebase, where the policies can be audited and consistently evaluated. This enables teams to build, refactor, and expand applications faster, because policy guardrails are in place and access management is externalized.
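To make the pattern concrete, here's a toy sketch (an illustration only, not how any particular AWS service is implemented) of externalized, policy-based authorization in Python: policies live in a central store rather than in application code, and every application asks the same evaluation function for a decision.

```python
# Toy policy store: in a real deployment this would be a managed policy
# store (for example, Amazon Verified Permissions), not an in-process list.
POLICIES = [
    {"effect": "allow", "principal": "analyst", "action": "read", "resource": "report"},
    {"effect": "allow", "principal": "admin",   "action": "*",    "resource": "*"},
]

def is_authorized(principal: str, action: str, resource: str) -> bool:
    """Default-deny evaluation: permit only if an allow policy matches."""
    for p in POLICIES:
        if (p["effect"] == "allow"
                and p["principal"] == principal
                and p["action"] in ("*", action)
                and p["resource"] in ("*", resource)):
            return True
    return False
```

Because every application calls the same evaluation function against the same store, a policy change takes effect everywhere at once, which is the consistency benefit described above.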

Want more AWS Security news? Follow us on Twitter.

Julian is a Principal Product Manager at AWS, with over 20 years’ experience in the field of Identity and Access Management. He leads the product team for Amazon Verified Permissions, and works closely with customers, partners, and the internal teams building out the service and the underlying Cedar language. He’s based in Northern California, where he enjoys mountain biking and the idea of camping.

source

Modern web application authentication and authorization with Amazon VPC Lattice

When building API-based web applications in the cloud, there are two main types of communication flow in which identity is an integral consideration:

To design an authentication and authorization solution for these flows, you need to add an extra dimension to each flow:

In each flow, a user or a service must present some kind of credential to the application service so that it can determine whether the flow should be permitted. The credentials are often accompanied with other metadata that can then be used to make further access control decisions.

If your application already has client authentication, such as a web application using OpenID Connect (OIDC), you can still use the sample code to see how secure service-to-service flows can be implemented with VPC Lattice.

For a web application, particularly one that is API based and composed of multiple components, VPC Lattice is a great fit. With VPC Lattice, you can use native AWS identity features for credential distribution and access control, without the operational overhead that many application security solutions require.

For this example, the web application is constructed from multiple API endpoints. These are typical REST APIs, which provide API connectivity to various application components.

VPC Lattice doesn’t support OAuth2 client or inspection functionality; however, it can verify HTTP header contents. This means you can use header matching within a VPC Lattice service policy to grant access to a VPC Lattice service only if the correct header is included. By generating the header based on validation that occurs before traffic enters the service network, you can use context about the user at the service network or service to make access control decisions.
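As an illustration, a header-matching condition in a VPC Lattice auth policy might be shaped like the following, expressed here as a Python dict you could serialize to JSON. The x-jwt-scope header name and the scope value are assumptions for this sketch, and the vpc-lattice-svcs:RequestHeader/&lt;name&gt; condition-key pattern should be checked against the VPC Lattice documentation:

```python
import json

# Sketch of an auth policy statement that requires a specific HTTP header
# value before the Invoke action is allowed. Header name and value are
# illustrative, not prescribed by VPC Lattice.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "vpc-lattice-svcs:Invoke",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "vpc-lattice-svcs:RequestHeader/x-jwt-scope": "test.all"
            }
        },
    }],
}

policy_json = json.dumps(policy, indent=2)
```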

Figure 1: User-to-service flow

The solution uses Envoy to terminate the HTTP request from an OAuth 2.0 client. This flow is shown in Figure 1.

Figure 2: JWT Scope to HTTP headers

By adding an authorization policy that permits access only from Envoy (through validating the Envoy SigV4 signature) and only with the correct scopes provided in HTTP headers, you can effectively lock down a VPC Lattice service to specific verified users coming from Envoy who are presenting specific OAuth2 scopes in their bearer token.

To answer the original question of where the identity comes from, the identity is provided by the user when communicating with their identity provider (IdP). In addition, Envoy presents its own identity, from its underlying compute, to enter the VPC Lattice service network. From a configuration perspective, this means your user-to-service communication flow doesn’t require understanding of the user, or the storage of user or machine credentials.

The sample code provided shows a full Envoy configuration for VPC Lattice, including SigV4 signing, access token validation, and extraction of JWT contents to headers. This reference architecture supports various clients, including server-side web applications, thick Java clients, and even command line interface-based clients calling the APIs directly. I don’t cover OAuth clients in detail in this post; however, the optional sample code allows you to use an OAuth client and flow to talk to the APIs through Envoy.

In the service-to-service flow, you need a way to provide AWS credentials to your applications and configure them to use SigV4 to sign their HTTP requests to the destination VPC Lattice services. Your application components can have their own identities (IAM roles), which allows you to uniquely identify application components and make access control decisions based on the particular flow required. For example, application component 1 might need to communicate with application component 2, but not application component 3.
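The AWS SDKs handle SigV4 signing for you, but the signing-key derivation at its core is short enough to sketch with the Python standard library. This follows the documented SigV4 key-derivation chain; the secret key, date, Region, and service name below are placeholders:

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key via the HMAC chain: date -> region -> service."""
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

# Placeholder values; a real client obtains credentials from its IAM task role.
key = sigv4_signing_key("EXAMPLEKEY", "20240101", "us-west-2", "vpc-lattice-svcs")
```

In practice, the application's HTTP client (or the Envoy proxy in the user-to-service flow) uses a key derived this way to sign a canonical form of each request, and VPC Lattice validates the signature to establish the caller's IAM identity.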

Figure 3: Service-to-service flow

This design uses a service network auth policy that permits access to the service network by specific IAM principals. This can be used as a guardrail to provide overall access control over the service network and underlying services. Even if an individual service auth policy is removed, the service network policy is still enforced first, so you can be confident that you can identify the sources of network traffic into the service network and block traffic that doesn’t come from a previously defined AWS principal.

The preceding auth policy example grants permissions to any authenticated request that uses one of the IAM roles app1TaskRole, app2TaskRole, app3TaskRole, or EnvoyFrontendTaskRole to make requests to the services attached to the service network. The next section shows how service auth policies can be used in conjunction with service network auth policies.
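Based on that description, the service network auth policy might be shaped like the following sketch (the role names are those listed above; the account ID is a placeholder):

```python
# Sketch of a service network auth policy permitting only the four task
# roles described in the text. The account ID is a placeholder.
service_network_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": [
                "arn:aws:iam::111122223333:role/app1TaskRole",
                "arn:aws:iam::111122223333:role/app2TaskRole",
                "arn:aws:iam::111122223333:role/app3TaskRole",
                "arn:aws:iam::111122223333:role/EnvoyFrontendTaskRole",
            ]
        },
        "Action": "vpc-lattice-svcs:Invoke",
        "Resource": "*",
    }],
}
```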

Individual VPC Lattice services can have their own policies defined and implemented independently of the service network policy. This design uses a service policy to demonstrate both user-to-service and service-to-service access control.

The preceding auth policy is an example that could be attached to the app1 VPC Lattice service. The policy contains two statements:

As with a standard IAM policy, there is an implicit deny, meaning no other principals will be permitted access.
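Here is a sketch of what such a service auth policy for app1 could look like, with one statement for the user-to-service path (Envoy plus a validated scope header) and one for a service-to-service caller. The role names come from the text; the header name, scope value, and account ID are illustrative:

```python
# Sketch of an app1 service auth policy with the two statements described
# in the text. All identifiers are placeholders.
app1_service_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # User-to-service: allow Envoy, but only with a validated scope header.
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/EnvoyFrontendTaskRole"},
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"vpc-lattice-svcs:RequestHeader/x-jwt-scope": "test.all"}
            },
        },
        {   # Service-to-service: allow app2's task role to call app1 directly.
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/app2TaskRole"},
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
        },
    ],
}
```

Any principal not matched by one of these statements, or by the service network policy, falls through to the implicit deny.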

The caller principals are identified by VPC Lattice through the SigV4 signing process. This means that, by using the identities provisioned to the underlying compute, the network flow can be associated with a service identity, which can then be authorized by VPC Lattice service access policies.

This model of access control supports a distributed development and operational model. Because the service network auth policy is decoupled from the service auth policies, the service auth policies can be iterated upon by a development team without impacting the overall policy controls set by an operations team for the entire service network.

Figure 4: CDK deployable solution

The AWS CDK solution deploys four Amazon ECS services, one for the frontend Envoy server for the client-to-service flow, and the remaining three for the backend application components. Figure 4 shows the solution when deployed with the internal domain parameter application.internal.

Each backend application component is a simple Node.js Express server that prints the contents of your request in JSON format and performs service-to-service calls.

A number of other infrastructure components are deployed to support the solution:

The code for Envoy and the application components can be found in the lattice_soln/containers directory.

AWS CDK code for all other deployable infrastructure can be found in lattice_soln/lattice_soln_stack.py.

Before you begin, you must have the following prerequisites in place:

This solution has been tested with Okta; however, any OAuth-compatible provider will work if it can issue access tokens and you can retrieve them from the command line.

The following instructions describe the configuration process for Okta using the Okta web UI. This allows you to use the device code flow to retrieve access tokens, which can then be validated by the Envoy frontend deployment.

During the API Integration step, you should have collected the audience, JWKS URI, and issuer. These fields are used on the command line when installing the CDK project with OAuth support.

If you only want to deploy the solution with service-to-service flows, you can deploy with a CDK command similar to the following:

To deploy the solution with OAuth functionality, you must provide the following parameters:

The solution can be deployed with a CDK command as follows:

For this solution, network access to the web application is secured through two main controls:

The Envoy configuration strips any x- headers coming from user clients and replaces them with x-jwt-subject and x-jwt-scope headers based on successful JWT validation. You are then able to match these x-jwt-* headers in VPC Lattice policy conditions.
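The header handling described here can be sketched in Python: drop any client-supplied x- headers, decode the payload of an already-validated JWT, and emit trusted x-jwt-subject and x-jwt-scope headers. This is an analogue of the Envoy behavior, not its actual configuration, and the sub and scp claim names are assumptions for illustration:

```python
import base64
import json

def _jwt_payload(token: str) -> dict:
    """Decode a JWT payload. Signature validation is assumed to have already
    happened (Envoy performs it before this step)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def rewrite_headers(headers: dict, token: str) -> dict:
    """Strip client-supplied x- headers and add trusted x-jwt-* headers."""
    clean = {k: v for k, v in headers.items() if not k.lower().startswith("x-")}
    claims = _jwt_payload(token)
    clean["x-jwt-subject"] = claims.get("sub", "")
    clean["x-jwt-scope"] = claims.get("scp", "")
    return clean

# Demonstration with a fake (unsigned) token; note the spoofed client header
# is discarded and replaced with the value taken from the token itself.
_claims = base64.urlsafe_b64encode(
    json.dumps({"sub": "alice", "scp": "test.all"}).encode()
).decode().rstrip("=")
demo = rewrite_headers(
    {"x-jwt-scope": "spoofed", "Accept": "application/json"},
    "hdr." + _claims + ".sig",
)
```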

This solution implements TLS endpoints on VPC Lattice and the Application Load Balancers. To reduce cost for this example, the container instances don’t implement TLS, so traffic is in cleartext between the Application Load Balancers and the container instances; TLS for this hop can be implemented separately if required.

In these examples, I’ve configured the domain during the CDK installation as application.internal, which will be used for communicating with the application as a client. If you change this, adjust your command lines to match.

[Optional] For examples 3 and 4, you need an access token from your OAuth provider. In each of the examples, I’ve embedded the access token in the AT environment variable for brevity.

For these first two examples, you must sign in to the container host and run a command in your container. This is because the VPC Lattice policies allow traffic from the containers. I’ve assigned IAM task roles to each container, which are used to uniquely identify them to VPC Lattice when making service-to-service calls.

To set up service-to-service calls (permitted):

Figure 5: Cluster console

Figure 6: Container instances

Figure 7: Single container instance

The policy statements permit app2 to call app1. By using the path app2/call-to-app1, you can force this call to occur.

Test this with the following commands:

You should see the following output:

The policy statements don’t permit app2 to call app3. You can simulate this in the same way and verify that the access isn’t permitted by VPC Lattice.

To set up service-to-service calls (denied):

You can change the curl command from Example 1 to test app2 calling app3.

If you’ve deployed using OAuth functionality, you can test from the shell in Example 1 that you’re unable to access the frontend Envoy server (application.internal) without a valid access token, and that you’re also unable to access the backend VPC Lattice services (app1.application.internal, app2.application.internal, app3.application.internal) directly.

You can also verify that you cannot bypass the VPC Lattice service and connect to the load balancer or web server container directly.

If you’ve deployed using OAuth functionality, you can test from the shell in Example 1 to access the application with a valid access token. A client can reach each application component by using application.internal/<componentname>. For example, application.internal/app2. If no component name is specified, it will default to app1.

This will fail when attempting to connect to app3 through Envoy, because we’ve denied user-to-service calls in the VPC Lattice service policy.

You’ve seen how you can use VPC Lattice to provide authentication and authorization to both user-to-service and service-to-service flows. I’ve shown you how to implement some novel and reusable solution components:

All of this is created almost entirely with managed AWS services, meaning you can focus more on security policy creation and validation and less on managing components such as service identities, service meshes, and other self-managed infrastructure.

Some ways you can extend upon this solution include:

I look forward to hearing about how you use this solution and VPC Lattice to secure your own applications!

Nigel is a Security, Risk, and Compliance consultant for AWS ProServe. He’s an identity nerd who enjoys solving tricky security and identity problems for some of our biggest customers in the Asia Pacific Region. He has two cavoodles, Rocky and Chai, who love to interrupt his work calls, and he also spends his free time carving cool things on his CNC machine.


Enable multi-admin support to manage security policies at scale with AWS Firewall Manager

These are some of the use cases and challenges faced by large enterprise organizations when scaling their security operations:

Large organizations tend to be divided into multiple organizational units, each of which represents a function within the organization. Risk appetite, and therefore security policy, can vary dramatically between organizational units. For example, organizations may support two types of users: central administrators and app developers, both of whom can administer security policy but might do so at different levels of granularity. The central admin applies a baseline and relatively generic policy for all accounts, while the app developer can be made an admin for specific accounts and be allowed to create custom rules for the overall policy. A single administrator interface limits the ability for multiple administrators to enforce differing policies for the organizational unit to which they are assigned.

The benefit of centralized management is that you can enforce a centralized policy across multiple services that the management console supports. However, organizations might have different administrators for each service. For example, the team that manages the firewall could be different from the team that manages a web application firewall solution. Aggregating administrative access under a single administrator might not match the way organizations map services to administrators.

Most security frameworks call for auditing procedures, to gain visibility into user access, types of modifications to configurations, timestamps of incremental changes, and logs for periods of downtime. An organization might want only specific administrators to have access to certain functions. For example, each administrator might have specific compliance scope boundaries based on their knowledge of a particular compliance standard, thereby distributing the responsibility for implementation of compliance measures. Single administrator access greatly reduces the ability to discern the actions of different administrators in that single account, making auditing unnecessarily complex.

Redundancy and resiliency are regarded as baseline requirements for security operations. Organizations want to ensure that if a primary administrator is locked out of a single account for any reason, other legitimate users are not affected in the same way. Single administrator access, in contrast, can lock out legitimate users from performing critical and time-sensitive actions on the management console.

In a single-administrator setting, it isn’t possible to enforce the principle of least privilege, because multiple operators might share the same level of access to the administrator account. This means that certain administrators could be granted broader access than their function in the organization requires.

Multi-admin support provides you the ability to use different administrator accounts to create administrative scopes for different parameters. Examples of these administrative scopes are included in the following table.

Multi-admin support helps alleviate many of the challenges just discussed by allowing administrators the flexibility to implement custom configurations based on job functions, while enforcing the principle of least privilege to help ensure that corporate policy and compliance requirements are followed. The following are some of the key benefits of multi-admin support:

Security is enhanced, given that the principle of least privilege can be enforced in a multi-administrator access environment. This is because the different administrators using Firewall Manager will be using delegated privileges that are appropriate for the level of access they are permitted. The result is that the scope for user errors, intentional errors, and unauthorized changes can be significantly reduced. Additionally, you attain an added level of accountability for administrators.

Companies with organizational units that have separate administrators are afforded greater levels of autonomy within their AWS Organizations accounts. The result is an increase in flexibility, where concurrent users can perform very different security functions.

It is easier to meet auditing requirements based on compliance standards in multi-admin accounts, because there is a greater level of visibility into user access and the functions performed on the services than when one omnipotent admin approves all policies. This can simplify routine audits through the generation of reports that detail the chronology of security changes implemented by specific admins over time.

Multi-admin management support helps avoid the limitations of having a single point of access and enhances availability by providing multiple administrators with their own levels of access. This can result in fewer disruptions, especially during periods that require time-sensitive changes to be made to security configurations.

You can enable trusted access using either the Firewall Manager console or the AWS Organizations console. To do this, you sign in with your AWS Organizations management account and configure an account allocated for security tooling within the organization as the Firewall Manager administrator account. After this is done, subsequent multi-admin Firewall Manager operations can also be performed using AWS APIs. With accounts in an organization, you can quickly allocate resources, group multiple accounts, and apply governance policies to accounts or groups. This simplifies operational overhead for services that require cross-account management.

Multi-admin support in Firewall Manager unlocks several use cases pertaining to admin role-based access. The key use cases are summarized here.

In a multi-admin configuration environment, each Firewall Manager administrator’s activities are logged and recorded according to corporate compliance standards. This is useful when dealing with the troubleshooting of security incidents, and for compliance with auditing standards.

Regulations specific to a particular industry, such as Payment Card Industry (PCI), and industry-specific legislation, such as HIPAA, require restricted access, control, and separation of tasks for different job functions. Failure to adhere to such standards could result in penalties. With administrative scope extending to policy types, customers can assign responsibility for managing particular firewall policies according to user role guidelines, as specified in compliance frameworks.

Many state or federal frameworks, such as the California Consumer Privacy Act (CCPA), require that admins adhere to customized regional requirements, such as data sovereignty or privacy requirements. Multi-admin Firewall Manager support helps organizations to adopt these frameworks by making it easier to assign admins who are familiar with the regulations of a particular region to that region.

Figure 1: Use cases for multi-admin support on AWS Firewall Manager

To configure multi-admin support on Firewall Manager, use the following steps:

Figure 2: Overview of the AWS Organizations console

Figure 3: AWS Firewall Manager settings to update policy types

Figure 4: Select AWS Network Firewall as a policy type that can be managed by this administration account

The results of your selection are shown in Figure 5. The admin has been granted privileges to set AWS Network Firewall policy across all Regions and all accounts.
 

Figure 5: The admin has been granted privileges to set Network Firewall policy across all Regions and all accounts

Figure 6: The administrative scope details for the admin

To achieve this second use case, choose Edit, and then add the multiple sub-accounts or the OU that you need the admin restricted to, as shown in Figure 7.
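These console steps correspond to the Firewall Manager PutAdminAccount API and its AdminScope structure. The following boto3-flavored sketch shows the shape of a scope object restricting an admin to specific accounts, one Region, and one policy type; field names reflect my reading of the FMS API, the account IDs and Region are placeholders, and the call itself is left commented out:

```python
# Sketch of an administrative scope for a Firewall Manager admin account.
# All identifiers are placeholders; verify field names against the FMS
# PutAdminAccount API reference before use.
admin_scope = {
    "AccountScope": {
        "Accounts": ["111122223333"],
        "AllAccountsEnabled": False,
        "ExcludeSpecifiedAccounts": False,
    },
    "RegionScope": {
        "Regions": ["us-west-1"],  # US West (N. California), as in Figure 8
        "AllRegionsEnabled": False,
    },
    "PolicyTypeScope": {
        "PolicyTypes": ["NETWORK_FIREWALL"],
        "AllPolicyTypesEnabled": False,
    },
}

# With credentials in place, this would be applied from the Organizations
# management account roughly as follows:
# import boto3
# fms = boto3.client("fms")
# fms.put_admin_account(AdminAccount="444455556666", AdminScope=admin_scope)
```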
 

Figure 7: Add multiple sub-accounts or an OU that you need the admin restricted to

Figure 8: Restricting admin privileges only to the US West (N California) Region

Large enterprises need strategies for operationalizing security policy management so that they can enforce policy across organizational boundaries, deal with policy changes across security services, and adhere to auditing and compliance requirements. Multi-admin support in Firewall Manager provides a framework that admins can use to organize their workflow across job roles, to help maintain appropriate levels of security while providing the autonomy that admins desire.

Mun is a Principal Security Service Specialist at AWS. Mun sets go-to-market strategies and prioritizes customer signals that contribute to service roadmap direction. Before joining AWS, Mun was a Senior Director of Product Management for a wide range of cybersecurity products at Cisco. Mun holds an MS in Cybersecurity Risk and Strategy and an MBA.


AWS re:Invent 2023: Security, identity, and compliance recap

At re:Invent 2023, key themes ran throughout the AWS security service announcements, underscoring the security challenges that we help customers address through the sharing of knowledge and continuous development of our native security services. The key themes include helping you architect for zero trust, manage identity and access at scale, integrate security early in the development cycle, enhance container security, and use generative artificial intelligence (AI) to improve security services and mean time to remediation.

To help you more efficiently manage identity and access at scale, we introduced several new features:

To help you improve your security outcomes with generative AI and automated reasoning, we introduced the following new features:

We worked closely with AWS Partners to create offerings that make it simpler for you to protect your cloud workloads:


Nisha is a Senior Product Marketing Manager at AWS Security, specializing in detection and response solutions. She has a strong foundation in product management and product marketing within the domains of information security and data protection. When not at work, you’ll find her cake decorating, strength training, and chasing after her two energetic kiddos, embracing the joys of motherhood.

Himanshu is a Worldwide Specialist for AWS Security Services. He leads the go-to-market creation and execution for AWS security services, field enablement, and strategic customer advisement. Previously, he held leadership roles in product management, engineering, and development, working on various identity, information security, and data protection technologies. He loves brainstorming disruptive ideas, venturing outdoors, photography, and trying new restaurants.

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.


AWS HITRUST Shared Responsibility Matrix for HITRUST CSF v11.2 now available

SRM version 1.4.2 adds support for HITRUST Common Security Framework (CSF) v11.2 assessments, in addition to continued support for previous versions of HITRUST CSF assessments (v9.1 through v11.1). As with the previous SRM versions v1.4 and v1.4.1, SRM v1.4.2 enables users to trace the HITRUST CSF cross-version lineage and inheritability of requirement statements, especially when inheriting from or to v9.x and v11.x assessments.

Using the HITRUST certification, you can tailor your security control baselines to a variety of factors—including, but not limited to, regulatory requirements and organization type. As part of their approach to security and privacy, leading organizations in a variety of industries have adopted the HITRUST CSF.

The new AWS HITRUST SRM version 1.4.2 has been tailored to reflect both the Cross Version ID (CVID) and Baseline Unique ID (BUID) in the CSF object so that you can select the correct control for inheritance even if you’re still using an older version of the HITRUST CSF for your own assessment. As an additional benefit, the AWS HITRUST Inheritance Program also supports the control inheritance of AWS cloud-based workloads for new HITRUST e1 and i1 assessment types, in addition to the validated r2-type assessments offered through HITRUST.

Mark is the Program Manager for the AWS HITRUST Security Assurance Program. He has over 10 years of experience in the healthcare industry holding director-level IT and security positions both within hospital facilities and enterprise-level positions supporting greater than 30,000 user healthcare environments. Mark has been involved with HITRUST as both an assessor and validated entity for over 9 years.


AWS completes the 2023 South Korea CSP Safety Assessment Program

The audit scope of the 2023 assessment covered data center facilities in four Availability Zones (AZ) of the AWS Asia Pacific (Seoul) Region and the services that are available in that Region. The audit program assessed different security domains including security policies, personnel security, risk management, business continuity, incident management, access control, encryption, and physical security.

If you have feedback about this post, submit comments in the Comments section below.

Andy is the Customer Audit Lead for APJ, based in Singapore. He is responsible for all customer audits in the Asia Pacific region. Andy has been with Security Assurance since 2020 and has delivered key audit programs in Hong Kong, India, Indonesia, South Korea, and Taiwan.


AWS Customer Compliance Guides now publicly available

CCGs offer security guidance mapped to 16 different compliance frameworks for more than 130 AWS services and integrations. Customers can select from the frameworks and services available to see how security “in the cloud” applies to AWS services through the lens of compliance.

CCGs focus on security topics and technical controls that relate to AWS service configuration options. The guides don’t cover security topics or controls that are consistent across AWS services or those specific to customer organizations, such as policies or governance. As a result, the guides are shorter and are focused on the unique security and compliance considerations for each AWS service.

CCGs provide summaries of the user guides for AWS services and map configuration guidance to security control requirements from the following frameworks:

CCGs can help customers in the following ways:

Kevin is a Senior Security Partner Strategist in AWS World Wide Public Sector, specializing in helping customers meet their compliance goals. Kevin began his tenure with AWS in 2019 supporting U.S. government customers in AWS Security Assurance. He is based in Northern Virginia and enjoys spending time outdoors with his wife and daughter outside of work.


How to migrate your on-premises domain to AWS Managed Microsoft AD using ADMT

February 2, 2024: We’ve updated this post to fix broken links and added a note on migrating passwords.

You can now use the Active Directory Migration Toolkit (ADMT) along with the Password Export Service (PES) to migrate your self-managed AD to AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. This enables you to migrate AD objects and encrypted passwords for your users more easily.

In this post, we will show you how to migrate your existing AD objects to AWS Managed Microsoft AD. The source of the objects can be your self-managed AD running on EC2, on-premises, co-located, or even another cloud provider. We will show how to use ADMT and PES to migrate objects including users (and their passwords), groups, and computers.

The post assumes that you are familiar with AD and with using a Remote Desktop Protocol client to sign in to and use EC2 Windows instances.

In this post, we will migrate user and computer objects, as well as passwords, to a new AWS Managed Microsoft AD directory. The source will be an on-premises domain.

This example migration will be for a fairly simple use case. Large customers with complex source domains or forests may have more complex processes involved to map users, groups, and computers to the single OU structure of AWS Managed Microsoft AD. For example, you may want to migrate an OU at a time. Customers with single domain forests may be able to migrate in fewer steps. Similarly, the options you might select in ADMT will vary based on what you are trying to accomplish.

To perform the migration, we will use the Admin user account from the AWS Managed Microsoft AD. AWS creates the Admin user account and delegates administrative permissions to the account for an organizational unit (OU) in the AWS Managed Microsoft AD domain. This account has most of the permissions required to manage your domain, and all the permissions required to complete this migration.

In this example, we have a Source domain called source.local that’s running in a 10.0.0.0/16 network range, and we want to migrate users, groups, and computers to a destination domain in AWS Managed Microsoft AD called destination.local that’s running in a network range of 192.168.0.0/16.

To migrate users from source.local to destination.local, we need a migration computer that we join to the destination.local domain on which we will run ADMT. We also use this machine to perform administrative tasks on the AWS Managed Microsoft AD. As a prerequisite for ADMT, we must install Microsoft SQL Express 2019 on the migration computer. We also need an administrative account that has permissions in both the source and destination AD domains. To do this, we will use an AD trust and add the AWS Managed Microsoft AD admin account from destination.local to the source.local domain. Next we will install ADMT on the migration computer, and run PES on one of the source.local domain controllers. Finally, we will migrate the users and computers.

For this example, we have a handful of users, groups, and computers, shown in the source domain in these screenshots, that we will migrate:

Figure 1: Example source users

Figure 2: Example client computers

In the remainder of this post, we will show you how to do the migration in five main steps:

The ADMT tool should be installed on a computer that isn't a domain controller in the destination domain, destination.local. For this, we will launch an EC2 instance in the same VPC as the AWS Managed Microsoft AD domain controllers and add it to the destination.local domain using the EC2 seamless domain join feature. This instance will act as the ADMT migration computer.

Figure 3: the “Administrator’s Properties” dialog box

Next, we need to install SQL Express and ADMT on the migration computer by following these steps.

Figure 4: Specify the “Database (ServerInstance)”

Figure 5: The “Database Import” dialog box

We'll use PES to take care of encrypted password synchronization. Before we configure that, we need to create an encryption key that will be used to encrypt passwords during the migration.

Here’s an example:
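A typical key-creation command, run in an elevated Command Prompt on the migration computer, looks like the following. The key file path is an assumption for illustration; using * for /keypassword prompts you to enter the password instead of placing it on the command line:

```
admt key /option:create /sourcedomain:source.local /keyfile:"C:\pes.key" /keypassword:*
```

Copy the resulting key file to the source domain controller where you will install PES.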

Note: If you get an error stating that the command is not found, close and reopen Command Prompt to refresh the path locations to the ADMT executable, and then try again.

Figure 6: Start the Password Export Server Service

Figure 7: List of migration options

In this example, we will place them in the Users OU:

Figure 8: The “Account Transition Options” dialog

Figure 9: Common user options

Figure 10: The “Conflict Management” dialog box

In our example, you can see that our three users, and any groups they were members of, have been migrated.

Figure 11: The “Migration Progress” window

We can verify this by checking that the users exist in our destination.local domain:

Figure 12: Checking the users exist in the destination.local domain

Now, we’ll move on to computer objects.

Figure 13: Four computers that will be migrated

Figure 14: The “Translate Objects” dialog box

The migration process will show as completed, but we need to check the ADMT log to make sure the entire process worked. The agent installation is logged for each computer:

2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-56SQFFFJCR1.source.local

2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-IG2V2NAN1MU.source.local

2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-QKQEJHUEV27.source.local

2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-SE98KE4Q9CR.source.local

If the admin user doesn't have access to the C$ or admin$ share on a computer in the source domain, the installation of the agent will fail, as shown here:

2017-08-11 04:09:29 ERR2:7006 Failed to install agent on \\WIN-IG2V2NAN1MU.source.local, rc=5 Access is denied.

Once the agent is installed, it will disjoin the computer from source.local and join it to destination.local. The log file updates when this has been successful:

2017-08-11 04:13:29 Post-check passed on the computer 'WIN-SE98KE4Q9CR.source.local'. The new computer name is 'WIN-SE98KE4Q9CR.destination.local'.

2017-08-11 04:13:29 Post-check passed on the computer 'WIN-QKQEJHUEV27.source.local'. The new computer name is 'WIN-QKQEJHUEV27.destination.local'.

2017-08-11 04:13:29 Post-check passed on the computer 'WIN-56SQFFFJCR1.source.local'. The new computer name is 'WIN-56SQFFFJCR1.destination.local'.
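With many computers in scope, reading the ADMT log by hand gets tedious. A minimal, hypothetical helper to pull out error lines, assuming the timestamped log format shown in the excerpts above, might look like this:

```python
import re

# Hypothetical sketch for scanning ADMT log output; it assumes error lines
# carry an "ERR<level>:<code>" token after the timestamp, as in the
# excerpts above (for example, "ERR2:7006").
ERROR_TOKEN = re.compile(r"\bERR\d+:\d+\b")

def find_errors(log_text: str) -> list[str]:
    """Return the log lines that contain an ADMT error code."""
    return [line for line in log_text.splitlines() if ERROR_TOKEN.search(line)]
```

Running this over the migration log quickly surfaces computers whose agent installation or domain move failed, so you can remediate (for example, fix share permissions) and rerun the migration for just those machines.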

You can then view the new computer objects in the destination domain.

Figure 15: Confirm the computer is member of the destination.local domain

In this simple example, we showed how to migrate users and their passwords, groups, and computer objects from an on-premises deployment of Active Directory to AWS Managed Microsoft AD. We created a management instance on which we ran SQL Server Express and ADMT, created a forest trust to grant an account the permissions needed to use ADMT to move users, configured ADMT and the PES tool, and then stepped through the migration using ADMT.

ADMT gives us a flexible way to migrate to AWS Managed Microsoft AD with powerful customization of the migration, and PES makes the process more secure through encrypted password synchronization. You may need additional investigation and planning if the complexity of your environment requires a different approach to some of these steps.

Austin is a Cloud Support Engineer specializing in enterprise applications on AWS. He holds subject matter expert accreditations for WorkSpaces and FSx for ONTAP. Outside of work he enjoys playing video games, traveling, and fishing.

source

How to automate rule management for AWS Network Firewall

For this walkthrough, the following prerequisites must be met:

Figure 1 describes how the anfw-automate solution uses distributed firewall rule configurations to simplify rule management for multiple teams. The rules are validated, transformed, and stored in the central AWS Network Firewall (ANFW) policy. The solution isolates rule generation to the spoke AWS accounts, but still uses a shared firewall policy and a central ANFW for traffic filtering. This approach gives spoke account owners the flexibility to manage their own firewall rules while keeping them accountable for their rules in the firewall policy. The solution enables the central security team to validate and override user-defined firewall rules before pushing them to the production firewall policy. The security team operating the central firewall can also define additional rules that apply to all spoke accounts, thereby enforcing organization-wide security policies. The firewall rules are then compiled and applied to Network Firewall in seconds, providing near real-time response in critical security incidents.

Figure 1: Workflow launched by uploading a configuration file to the configuration (config) bucket

The Network Firewall endpoints and the anfw-automate solution are both deployed in the central account. The spoke accounts use the application for rule automation and Network Firewall for traffic inspection.

As shown in Figure 1, each spoke account contains the following:

In the central account:

The input validations make sure that rules defined by one spoke account don't impact rules from other spoke accounts. The validations applied to the firewall rules can be updated and managed as needed based on your requirements. Rules must follow a strict format, and any deviation from that format leads to rejection of the request.
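The actual validation logic lives in the solution's code; purely as an illustration of the kind of strict format check involved, a minimal sketch for Suricata-compatible stateful rules might look like this:

```python
import re

# Hypothetical sketch, not the solution's actual validation: accept only
# rules shaped like "action protocol src_ip src_port -> dst_ip dst_port
# (options with a sid)" and reject everything else.
RULE_PATTERN = re.compile(
    r"^(pass|drop|alert|reject)\s+"   # action
    r"(tcp|udp|icmp|ip|tls|http)\s+"  # protocol
    r"\S+\s+\S+\s+->\s+\S+\s+\S+\s+"  # src ip/port -> dst ip/port
    r"\(.*sid:\s*\d+;.*\)$"           # options must include a sid
)

def is_valid_rule(rule: str) -> bool:
    """Return True if the rule matches the expected Suricata-like shape."""
    return bool(RULE_PATTERN.match(rule.strip()))
```

A check like this runs before a rule ever reaches the shared policy, so a malformed or overly broad submission is rejected in the spoke account rather than degrading the central firewall.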

The function makes cross-Region calls to Network Firewall based on the Region provided in the user configuration. There is no need to deploy the RuleExecute and RuleCollect Lambda functions in multiple Regions unless a use case warrants it.

The following section guides you through the deployment of the rules management engine.

In this phase, you deploy the application pipeline in the resource account. The pipeline is responsible for deploying multi-Region cross-account CDK stacks in both the central account and the delegated administrator account.

The application pipeline stack deploys three stacks in all configured Regions: LambdaStack and ServerlessStack in the central account and StacksetStack in the delegated administrator account. It’s recommended to deploy these stacks solely in the primary Region, given that the solution can effectively manage firewall policies across all supported Regions.

Figure 2: CloudFormation stacks deployed by the application pipeline

You can also deploy the spoke account stack manually for testing using the AWS CloudFormation template in templates/spoke-serverless-stack.yaml. This will create and configure the needed spoke account resources.

Figure 3: Example output of application pipeline deployment

After deploying the solution, each spoke account owner defines stateful rules for each VPC in a configuration file and uploads it to the S3 bucket. Each spoke account owner must also verify the VPC's connection to the firewall using the centralized deployment model. The configuration, written in YAML, can contain multiple rule definitions, and each account must provide one configuration file per VPC to establish accountability and non-repudiation.
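Purely as an illustrative sketch, a per-VPC configuration file might look something like the following. The field names and values here are assumptions for illustration, not the solution's actual schema, which is defined in the anfw-automate project:

```yaml
# Hypothetical example only -- field names are assumptions, not the real schema.
account: "111122223333"
region: eu-west-1
vpc: vpc-0a1b2c3d4e5f67890
rules:
  stateful:
    - 'pass tls any any -> any 443 (msg:"allow outbound TLS"; sid:100001;)'
```

Keeping one such file per VPC means every rule in the central policy can be traced back to the account and VPC that requested it.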

Now that you’ve deployed the solution, follow the next steps to verify that it’s completed as expected, and then test the application.

Figure 4: Example configuration file for eu-west-1 Region

Figure 5: Example of logs generated by the anfw-automate in a spoke account

Figure 6: Rules created in Network Firewall rule group based on the configuration file in Figure 4

To avoid incurring future charges, remove all stacks and instances used in this walkthrough.

This solution simplifies network security by combining distributed firewall rule configurations into a centralized Network Firewall policy. Automated rule management can reduce operational overhead, cut firewall change request completion times from minutes to seconds, offload security and operational mechanisms such as input validation, state management, and request throttling, and enable central security teams to enforce global firewall rules without compromising the flexibility of user-defined rulesets.

Ajinkya is a Security Consultant at Amazon Professional Services, specializing in security consulting for AWS customers within the automotive industry since 2019. He has presented at AWS re:Inforce and contributed articles to the AWS Security blog and AWS Prescriptive Guidance. Beyond his professional commitments, he indulges in travel and photography.

Stephan is a Security Consultant working for automotive customers at AWS Professional Services. He is a technology enthusiast and passionate about helping customers gain a high security bar in their cloud infrastructure. When Stephan isn’t working, he’s playing volleyball or traveling with his family around the world.

source

2023 C5 Type 2 attestation report available, including two new Regions and 170 services in scope

AWS has added the following 16 services to the current C5 scope:

AWS strives to continuously bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. If you have questions or feedback about C5 compliance, reach out to your AWS account team.


Julian is a Manager in AWS Security Assurance based in Berlin, Germany. He leads third-party security audits across Europe and specifically the DACH region. He has previously worked as an information security department lead of an accredited certification body and has multiple years of experience in information security and security assurance and compliance.

Andreas is a Senior Manager in Security Assurance at AWS, based in Frankfurt, Germany. His team is responsible for third-party and customer audits, attestations, certifications, and assessments across EMEA. Previously, he was a CISO in a DAX-listed telecommunications company in Germany. He also worked for different consulting companies managing large teams and programs across multiple industries and sectors.

source
