
Cloud Elevate PCA Certification Preparation - Case Study Questions


  • [[TerramEarth]]

  • The [[TerramEarth]] development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework. Which method should they use?
    • ^^Use [[Google App Engine]] with Google [[Cloud Endpoints]]. Focus on an API for dealers and partners^^
    • Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public
    • Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public
    • Use Google Container Engine with a Django Python container. Focus on an API for the public
    • Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners
  • Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data. What should you do?
    • ^^Build or leverage an [[OAuth]]-compatible access control system^^
    • Build SAML 2.0 SSO compatibility into your authentication system
    • Restrict data access based on the source IP address of the partner systems
    • Create secondary credentials for each dealer that can be given to the trusted third party
  • [[TerramEarth]] plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, roughly 40 TB an hour. How should you design the data ingestion?
    • Vehicles write data directly to GCS
    • ^^Vehicles write data directly to Google [[Cloud Pub/Sub]]^^
    • Vehicles stream data directly to Google BigQuery
    • Vehicles continue to write data using the existing system ([[FTP]])
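
A quick sanity check of the volume quoted in the question, using only the numbers it gives: 20 million 600-byte records per second is about 12 GB/s, i.e. roughly 40 TB per hour, which is why a managed streaming ingestion service such as Cloud Pub/Sub is the highlighted choice over FTP or direct writes.

```python
# Back-of-the-envelope check of the ingestion rate stated in the question.
records_per_second = 20_000_000
bytes_per_record = 600

bytes_per_second = records_per_second * bytes_per_record        # 12 GB/s
tb_per_hour = bytes_per_second * 3600 / 1e12                    # decimal terabytes per hour

print(f"{bytes_per_second / 1e9:.0f} GB/s  ~  {tb_per_hour:.0f} TB/hour")   # ~12 GB/s, ~43 TB/hour
```
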
  • You analyzed [[TerramEarth]]'s business requirement to reduce downtime, and found that they can achieve the majority of time savings by reducing customers' wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time. Which modifications to the company's processes should you recommend?
    • Migrate from CSV to binary format, migrate from [[FTP]] to SFTP transport, and develop machine learning analysis of metrics
    • Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics
    • ^^Increase fleet cellular connectivity to 80%, migrate from [[FTP]] to streaming transport, and develop machine learning analysis of metrics^^
    • Migrate from [[FTP]] to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor
  • Which of [[TerramEarth]]'s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
    • Opex/capex allocation, LAN changes, capacity planning
    • ^^Capacity planning, TCO calculations, opex/capex allocation^^
    • Capacity planning, utilization measurement, data center expansion
    • Data Center expansion, TCO calculations, utilization measurement
  • To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the [[ETL]] process. The current [[FTP]] process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?
    • Use one Google Container Engine cluster of [[FTP]] servers. Save the data to a Multi-Regional bucket. Run the [[ETL]] process using data in the bucket
    • Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket
    • Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket
    • ^^Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google [[APIs]] over HTTP(S). Run the [[ETL]] process to retrieve the data from each Regional bucket^^
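
Part of why the highlighted option beats FTP is that uploads to Cloud Storage over HTTP(S) can be resumable, so a dropped cellular connection does not restart the transfer from byte zero. A minimal sketch with the google-cloud-storage client, assuming a regional bucket; the bucket and object names are purely illustrative:

```python
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
bucket = client.bucket("telemetry-us-central1")        # hypothetical regional bucket

blob = bucket.blob("vehicle-1234/2024-05-01.csv")      # hypothetical object name
blob.chunk_size = 8 * 1024 * 1024                      # send in 8 MiB chunks via the resumable upload protocol
blob.upload_from_filename("2024-05-01.csv")
```
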
  • [[TerramEarth]]'s 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google [[Cloud Storage]] (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100 K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?
    • Move all the data into 1 zone, then launch a [[Cloud Dataproc]] cluster to run the job
    • Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job
    • Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job
    • ^^Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a [[Cloud Dataproc]] cluster to finish the job^^
  • [[TerramEarth]] has equipped all connected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?
    • Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google [[Cloud Storage]] (GCS) Nearline bucket
    • Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Google [[BigQuery]]
    • Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in [[Cloud Bigtable]]
    • ^^Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket^^
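
A minimal sketch of the highlighted approach, assuming the vehicle-side computer (or an upload gateway) can run Python and that a bucket with the COLDLINE storage class already exists; all file and bucket names are placeholders:

```python
import gzip
import shutil

from google.cloud import storage  # pip install google-cloud-storage

# Compress the hourly snapshot before it leaves the vehicle.
with open("snapshot-2024-05-01T10.json", "rb") as src, \
        gzip.open("snapshot-2024-05-01T10.json.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Upload to a bucket that was created with the COLDLINE storage class.
bucket = storage.Client().bucket("terramearth-telemetry-coldline")
bucket.blob("2024/05/01/10/snapshot.json.gz").upload_from_filename("snapshot-2024-05-01T10.json.gz")
```
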
  • Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architectures should you consider? (Choose two.)
    • ^^Treat every microservice call between modules on the vehicle as untrusted.^^
    • Require IPv6 for connectivity to ensure a secure address space.
    • ^^Use a trusted platform module (TPM) and verify firmware and binaries on boot.^^
    • Use a functional programming language to isolate code execution cycles.
    • Use multiple connectivity subsystems for redundancy.
    • Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.
  • Operational parameters such as oil pressure are adjustable on each of [[TerramEarth]]'s vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?
    • Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically
    • ^^Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically^^
    • Implement a Google [[Cloud Dataflow]] streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically
    • Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud [[Machine Learning]] (ML) Platform to make operational adjustments automatically
  • For this question, refer to the [[TerramEarth]] case study. To be compliant with European [[GDPR]] regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both [[Cloud Storage]] and [[BigQuery]]. What should you do?
    • Create a [[BigQuery]] table for the European data, and set the table retention period to 36 months. For [[Cloud Storage]], use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
    • Create a [[BigQuery]] table for the European data, and set the table retention period to 36 months. For [[Cloud Storage]], use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
    • ^^Create a [[BigQuery]] time-partitioned table for the European data, and set the partition expiration period to 36 months. For [[Cloud Storage]], use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.^^
    • Create a [[BigQuery]] time-partitioned table for the European data, and set the partition expiration period to 36 months. For [[Cloud Storage]], use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months
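
A minimal sketch of the highlighted option using the google-cloud-bigquery and google-cloud-storage clients; the table, schema, and bucket names are placeholders, and 36 months is approximated as 1,095 days (the equivalent gsutil lifecycle JSON uses a DELETE action with an age of 1095):

```python
from google.cloud import bigquery, storage  # pip install google-cloud-bigquery google-cloud-storage

THIRTY_SIX_MONTHS_MS = 1095 * 24 * 60 * 60 * 1000   # ~36 months, in milliseconds

# BigQuery: a time-partitioned table whose partitions expire automatically.
bq = bigquery.Client()
table = bigquery.Table(
    "my-project.eu_dataset.vehicle_events",          # hypothetical table id
    schema=[
        bigquery.SchemaField("vehicle_id", "STRING"),
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
    expiration_ms=THIRTY_SIX_MONTHS_MS,
)
bq.create_table(table)

# Cloud Storage: lifecycle DELETE rule keyed on object age.
gcs_bucket = storage.Client().bucket("eu-vehicle-raw-data")   # hypothetical bucket
gcs_bucket.add_lifecycle_delete_rule(age=1095)
gcs_bucket.patch()
```
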
  • For this question, refer to the [[TerramEarth]] case study. TerramEarth has decided to store data files in [[Cloud Storage]]. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost. Which two actions should you take?
    • ^^Create a [[Cloud Storage]] lifecycle rule with Age: "30", [[Storage Class]]: "Standard", and Action: "Set to Coldline", and create a second GCS lifecycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".^^
    • Create a [[Cloud Storage]] lifecycle rule with Age: "30", [[Storage Class]]: "Coldline", and Action: "Set to Nearline", and create a second GCS lifecycle rule with Age: "91", Storage Class: "Coldline", and Action: "Set to Nearline".
    • Create a [[Cloud Storage]] lifecycle rule with Age: "90", [[Storage Class]]: "Standard", and Action: "Set to Nearline", and create a second GCS lifecycle rule with Age: "91", Storage Class: "Nearline", and Action: "Set to Coldline".
    • Create a [[Cloud Storage]] lifecycle rule with Age: "30", [[Storage Class]]: "Standard", and Action: "Set to Coldline", and create a second GCS lifecycle rule with Age: "365", Storage Class: "Nearline", and Action: "Delete".
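
For the highlighted pair of rules, a minimal sketch with the google-cloud-storage client (gsutil lifecycle set with an equivalent JSON file does the same thing); the bucket name is a placeholder:

```python
from google.cloud import storage  # pip install google-cloud-storage

bucket = storage.Client().bucket("terramearth-data-files")    # hypothetical bucket

# Rule 1: after 30 days in Standard, move objects to Coldline.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30, matches_storage_class=["STANDARD"])

# Rule 2: after 365 days, delete objects so only one year of data is kept.
bucket.add_lifecycle_delete_rule(age=365)

bucket.patch()
```
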
  • For this question, refer to the [[TerramEarth]] case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?
    • ^^Replace the existing data warehouse with [[BigQuery]]. Use table [[partitioning]].^^
    • Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.
    • Replace the existing data warehouse with [[BigQuery]]. Use federated data sources.
    • Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine preemptible instance with 32 CPUs.
  • For this question, refer to the [[TerramEarth]] case study. A new architecture that writes all incoming data to [[BigQuery]] has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost. What should you do?
    • Set up a streaming [[Cloud Dataflow]] job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
    • Create a Cloud Function that reads data from [[BigQuery]] and cleans it. Trigger the Cloud Function from a Compute Engine instance.
    • Create a SQL statement on the data in [[BigQuery]], and save it as a view. Run the view daily, and save the result to a new table.
    • ^^Use [[Cloud Dataprep]] and configure the [[BigQuery]] tables as the source. Schedule a daily job to clean the data.^^
  • For this question, refer to the [[TerramEarth]] case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?
    • ^^Use [[BigQuery]] as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using [[Cloud Pub/Sub]] and [[Cloud Dataflow]]. Use Google [[Data Studio]] for analysis and reporting.^^
    • Use [[BigQuery]] as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional [[Cloud Storage]] bucket using gcloud. Use Google [[Data Studio]] for analysis and reporting.
    • Use [[Cloud Dataproc]] Hive as the data warehouse. Upload gzip files to a Multi-Regional [[Cloud Storage]] bucket. Upload this data into [[BigQuery]] using gcloud. Use Google [[Data Studio]] for analysis and reporting.
    • Use [[Cloud Dataproc]] Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.
  • For this question, refer to the [[TerramEarth]] case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices. Considering the technical requirements, which components should you use for the ingestion of the data?
    • Google [[Kubernetes]] Engine with an SSL [[Ingress]]
    • ^^[[Cloud IoT Core]] with public/private key pairs^^
    • Compute Engine with project-wide [[SSH]] keys
    • Compute Engine with specific [[SSH]] keys
  • For this question, refer to the [[TerramEarth]] case study. You start to build a new application that uses a few [[Cloud Functions]] for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices. What should you do?
    • Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.
    • ^^Make func_query 'Require authentication.' Create a unique service account and associate it to func_display. Grant the service account invoker role for func_query. Create an id token in func_display and include the token to the request when invoking func_query.^^
    • Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
    • Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.
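
A minimal sketch of what func_display might do under the highlighted approach, assuming func_query has "Require authentication" enabled and func_display's service account holds the Cloud Functions Invoker role on it; the URL is a placeholder:

```python
import requests
import google.auth.transport.requests
import google.oauth2.id_token

FUNC_QUERY_URL = "https://europe-west1-my-project.cloudfunctions.net/func_query"   # placeholder

def call_func_query(payload: dict) -> requests.Response:
    # Mint an ID token for func_query's URL using func_display's own service account.
    auth_request = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_request, FUNC_QUERY_URL)

    # Cloud Functions verifies the Bearer token and checks the invoker role before
    # the request ever reaches func_query's code.
    return requests.post(
        FUNC_QUERY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
```
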
  • For this question, refer to the [[TerramEarth]] case study. You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on [[Cloud Run]]. You also want to make sure the services are highly available with low latency to your customers. What should you do?
    • Deploy [[Cloud Run]] services to multiple availability zones. Create [[Cloud Endpoints]] that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.
    • ^^Deploy [[Cloud Run]] services to multiple regions. Create serverless network endpoint groups pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.^^
    • Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.
    • Deploy [[Cloud Run]] services to multiple availability zones. Create a [[TCP]]/IP global load balancer. Add the Cloud Run Endpoints to its backend service.
  • For this question, refer to the [[TerramEarth]] case study. You are migrating a [[Linux]]-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do? (Choose two.)
    • ^^Open a support case regarding the CVE and chat with the support engineer.^^
    • Read the CVEs from the Google Cloud Status Dashboard to understand the impact.
    • ^^Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.^^
    • Post a question regarding the CVE in Stack Overflow to get an explanation.
    • Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.
  • For this question, refer to the [[TerramEarth]] case study. TerramEarth has a legacy web application that you cannot migrate to cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the [[URL]] to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost. What should you do?
    • Create a scheduled job in [[Cloud Run]] to invoke a container every minute. The container will check the application [[URL]]. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
    • Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a [[Python]] program to check the application [[URL]]. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
    • ^^Create a Cloud Monitoring uptime check to validate the application [[URL]]. If it fails, put a message in a [[Pub/Sub]] queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.^^
    • Use [[Cloud Error Reporting]] to check the application [[URL]]. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
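
A rough sketch of the Cloud Function side of the uptime-check flow, assuming the alerting policy for the uptime check publishes the standard Cloud Monitoring incident payload to a Pub/Sub topic that triggers this background function; the failover and notification steps are left as placeholders:

```python
import base64
import json

def on_uptime_alert(event, context):
    """Triggered by the Pub/Sub topic wired to the uptime check's alerting policy."""
    notification = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    # The Monitoring notification wraps the alert details in an "incident" object.
    if notification.get("incident", {}).get("state") == "open":
        switch_url_to_unavailable_page()   # placeholder: e.g. update a DNS record or LB backend
        notify_ops_team(notification)      # placeholder: e.g. post to the Ops team's chat webhook

def switch_url_to_unavailable_page():
    ...

def notify_ops_team(notification):
    ...
```
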
  • For this question, refer to the [[TerramEarth]] case study. You are building a microservice-based application for TerramEarth. The application is based on [[Docker]] containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts. What should you do?
    • ^^Configure a trigger in [[Cloud Build]] for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry.^^
    • Configure a trigger in Cloud Build for new source changes. The trigger invokes build jobs and build container images for the microservices. Tag the images with a version number, and push them to Cloud Storage.
    • Create a Scheduler job to check the repo every minute. For any new change, invoke Cloud Build to build container images for the microservices. Tag the images using the current timestamp, and push them to the Container Registry.
    • Configure a trigger in [[Cloud Build]] for new source changes. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.' Push the image to the Container Registry.
  • For this question, refer to the [[TerramEarth]] case study. TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to [[Cloud Storage]] for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?
    • ^^Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.^^
    • Configure the Storage Transfer service from Google Cloud to send the data from your data center to [[Cloud Storage]].
    • Make sure there are no other users consuming the 1Gbps link, and use multi-thread transfer to upload the data to [[Cloud Storage]].
    • Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to [[Cloud Storage]].
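
The arithmetic behind the Transfer Appliance answer: even with the 1-Gbps link fully dedicated to the transfer, a petabyte takes roughly three months, so it cannot land in Cloud Storage within the month the machine learning team has.

```python
# How long does 1 PB take over a fully dedicated 1 Gbps link?
petabyte_bits = 1e15 * 8     # 1 PB in bits (decimal units)
link_bps = 1e9               # 1 Gbps

days = petabyte_bits / link_bps / 86_400
print(f"{days:.0f} days")    # ~93 days, before protocol overhead or contention
```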

  • [[Mountkirk Games]]

  • [[Mountkirk Games]] wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?
    • ^^Tests should scale well beyond the prior approaches^^
    • Unit tests are no longer required, only end-to-end tests
    • Tests should be applied after the release is in the production environment
    • Tests should include directly testing the Google Cloud Platform (GCP) infrastructure
  • Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
    • ^^Create a scalable environment in GCP for simulating production load^^
    • Use the existing infrastructure to test the GCP-based backend at scale
    • Build stress tests into each component of your application using resources internal to GCP to simulate load
    • Create a set of static environments in GCP to test different levels of load, for example high, medium, and low
  • Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:
    • ✑ Services are deployed redundantly across multiple regions in the US and Europe
    • ✑ Only frontend services are exposed on the public internet
    • ✑ They can provide a single frontend IP for their fleet of services
    • ✑ Deployment artifacts are immutable
    • Which set of products should they use?
      • Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine
      • Google Cloud Storage, Google App Engine, Google Network Load Balancer
      • ^^Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer^^
      • Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager
  • Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?
    • Verify that the database is online
    • ^^Verify that the project quota hasn't been exceeded^^
    • Verify that the new feature code did not introduce any performance bugs
    • Verify that the load-testing team is not running their tool against production
  • Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production. What should you do to isolate development environments from staging and production?
    • Create a project for development and test and another for staging and production
    • Create a network for development and test and another for staging and production
    • Create one subnetwork for development and another for staging and production
    • ^^Create one project for development, a second for staging and a third for production^^
  • Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?
    • Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL
    • ^^Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery^^
    • Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
    • Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow
    • Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc
  • For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform. Which two steps should be part of their migration plan? (Choose two.)
    • ^^Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.^^
    • ^^Write a schema migration plan to denormalize data for better performance in BigQuery.^^
    • Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
    • Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.
    • Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.
  • For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?
    • Create network load balancers. Use preemptible Compute Engine instances.
    • Create network load balancers. Use non-preemptible Compute Engine instances.
    • Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.
    • ^^Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.^^
  • For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available. Which two steps should they take? (Choose two.)
    • ^^Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.^^
    • ^^Begin packaging their game backend artifacts in container images and running them on Google Kubernetes Engine to improve the ability to scale up or down based on game activity.^^
    • Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.
    • Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.
    • Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.
  • For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform's resilience to changes in mobile network latency. What should you do?
    • ^^Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.^^
    • Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.
    • Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.
    • Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.
  • For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?
    • Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.
    • Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.
    • Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.
    • ^^Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.^^
  • For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk's technical requirement for storing game activity in a time series database service?
    • ^^Cloud Bigtable^^
    • Cloud Spanner
    • BigQuery
    • Cloud Datastore
  • For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API. You want to follow Google-recommended practices. How should you design the backend?
    • Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.
    • Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.
    • ^^Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.^^
    • Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.
  • You need to optimize batch file transfers into Cloud Storage for Mountkirk Games' new Google Cloud solution. The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract transform load (ETL) tool. What should you do?
    • Use gsutil to batch move files in sequence.
    • ^^Use gsutil to batch copy the files in parallel.^^
    • Use gsutil to extract the files as the first part of ETL.
    • Use gsutil to load the files as the last part of ETL.
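
The parallelism in the highlighted answer comes from gsutil's top-level -m flag (for example, gsutil -m cp batch_stats/*.csv gs://my-staging-bucket/). A roughly equivalent sketch in Python, assuming a recent google-cloud-storage release that ships the transfer_manager helpers; the bucket and directory names are placeholders:

```python
import glob
import os

from google.cloud import storage
from google.cloud.storage import transfer_manager   # available in recent google-cloud-storage releases

bucket = storage.Client().bucket("mountkirk-batch-staging")   # hypothetical staging bucket
filenames = [os.path.basename(path) for path in glob.glob("batch_stats/*.csv")]

# Upload the batch files concurrently instead of one after another.
transfer_manager.upload_many_from_filenames(bucket, filenames, source_directory="batch_stats")
```
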
  • You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?
    • Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.
    • Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.
    • ^^Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.^^
    • Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.
  • Mountkirk Games wants to limit the physical location of resources to their operating Google Cloud regions. What should you do?
    • ^^Configure an organizational policy which constrains where resources can be deployed.^^
    • Configure IAM conditions to limit what resources can be configured.
    • Configure the quotas for resources in the regions not being used to 0.
    • Configure a custom alert in Cloud Monitoring so you can disable resources as they are created in other regions.
  • You need to implement a network ingress for a new game that meets the defined business and technical requirements. Mountkirk Games wants each regional game instance to be located in multiple Google Cloud regions. What should you do?
    • Configure a global load balancer connected to a managed instance group running Compute Engine instances.
    • Configure kubemci with a global load balancer and Google Kubernetes Engine.
    • Configure a global load balancer with Google Kubernetes Engine.
    • ^^Configure Ingress for Anthos with a global load balancer and Google Kubernetes Engine.^^
  • Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the user's perspective. What should you do?
    • Create CPU Utilization and Request Latency as service level indicators.
    • Create GKE CPU Utilization and Memory Utilization as service level indicators.
    • ^^Create Request Latency and Error Rate as service level indicators.^^
    • Create Server Uptime and Error Rate as service level indicators.
  • Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?
    • ^^Configure Workload Identity and service accounts to be used by the application platform.^^
    • Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.
    • Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.
    • Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.
  • Your development team has created a mobile game app. You want to test the new mobile app on Android and iOS devices with a variety of configurations. You need to ensure that testing is efficient and cost-effective. What should you do?
    • ^^Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.^^
    • Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.
    • Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on the containers, and test the mobile app.
    • Upload your mobile app with different configurations to Firebase Hosting and test each configuration.

  • [[EHR Healthcare]]

  • For this question, refer to the [[EHR Healthcare]] case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)
    • ^^Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.^^
    • ^^Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.^^
    • Use Firebase Authentication for EHR's user facing applications.
    • Implement [[Prometheus]] to detect and prevent security breaches on EHR's web-based applications.
    • Use [[GKE]] private clusters for all [[Kubernetes]] workloads.
  • For this question, refer to the [[EHR Healthcare]] case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)
    • ^^Enable [[Binary Authorization]] on [[GKE]], and sign containers as part of a [[CI/CD]] pipeline.^^
    • Configure [[Jenkins]] to utilize Kritis to cryptographically sign a container as part of a [[CI/CD]] pipeline.
    • Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.
    • ^^Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.^^
  • You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?
    • ^^Add a new [[Dedicated Interconnect]] connection.^^
    • Upgrade the bandwidth on the [[Dedicated Interconnect]] connection to 100 G.
    • Add three new Cloud VPN connections.
    • Add a new Carrier Peering connection.
  • For this question, refer to the [[EHR Healthcare]] case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?
    • Configure two [[Partner Interconnect]] connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones
    • Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.
    • Configure Direct Peering between [[EHR Healthcare]] and Google Cloud, and make sure you are peering at least two Google locations.
    • ^^Configure two [[Dedicated Interconnect]] connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.^^
  • For this question, refer to the [[EHR Healthcare]] case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated [[Pub/Sub]] into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?
    • Increase the [[Pub/Sub]] Total Timeout retry value.
    • Move from a Pub/Sub subscriber pull model to a push model.
    • ^^Turn off [[Pub/Sub]] message batching.^^
    • Create a backup [[Pub/Sub]] message queue.
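
A minimal sketch of the highlighted fix: the Pub/Sub publisher client buffers messages briefly to build batches by default, so constraining every batch to a single message with no latency budget effectively turns batching off; the project and topic are placeholders:

```python
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

# Effectively disable batching: each publish() call is sent immediately.
batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=1,   # never wait to accumulate more messages
    max_latency=0,    # do not hold messages back on a timer
)

publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic_path = publisher.topic_path("my-project", "portal-events")   # hypothetical topic

future = publisher.publish(topic_path, b"payload")
print(future.result())   # message id once the publish RPC completes
```
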
  • For this question, refer to the [[EHR Healthcare]] case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?
    • ^^Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.^^
    • Revoke the compute.networkAdmin role from all users in the project with front end instances.
    • Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.
    • Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
  • For this question, refer to the [[EHR Healthcare]] case study. You are responsible for designing the Google Cloud network architecture for Google [[Kubernetes]] Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?
    • ^^Use a private cluster with a private endpoint with master authorized networks configured.^^
    • Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.
    • Use a private cluster with a public endpoint with master authorized networks configured.
    • Use a public cluster with master authorized networks enabled and firewall rules.

  • [[Helicopter Racing League]] (HRL)

  • For this question, refer to the [[Helicopter Racing League]] (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:
      • It must provide low latency at minimal cost.
      • It must be able to identify duplicate credit cards and must not store plaintext card numbers.
      • It should support annual key rotation.
    • Which storage approach should you adopt for your tokenization service?
      • Store the card data in Secret Manager after running a query to identify duplicates.
      • ^^Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.^^
      • Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.
      • Use column-level encryption to store the data in [[Cloud SQL]].
  • For this question, refer to the [[Helicopter Racing League]] (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
    • Set up [[Cloud Tasks]] and a [[Cloud Storage]] bucket that triggers a Cloud Function.
    • Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
    • ^^Configure the deployment job to notify a [[Pub/Sub]] queue that triggers a Cloud Function.^^
    • Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
  • For this question, refer to the [[Helicopter Racing League]] (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can understand and interpret the predictions. What should you do?
    • ^^Use Explainable AI.^^
    • Use Vision AI.
    • Use Google Cloud's operations suite.
    • Use Jupyter Notebooks.
  • For this question, refer to the [[Helicopter Racing League]] (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?
    • Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.
    • Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.
    • ^^Use [[BigQuery]] for its scalability and ability to add columns to a schema. Partition race data based on season.^^
    • Use [[Cloud SQL]] for its ability to automatically manage storage increases and compatibility with [[MySQL]]. Use separate database instances for each season.
  • For this question, refer to the [[Helicopter Racing League]] (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?
    • Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.
    • Use the gcloud compute instances list to list the virtual machine instances that have the idle: true label set.
    • ^^Use the gcloud recommender command to list the idle virtual machine instances.^^
    • From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.
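
The highlighted answer maps to the idle VM recommender. From the CLI this is roughly gcloud recommender recommendations list --project=my-project --location=us-central1-a --recommender=google.compute.instance.IdleResourceRecommender; a sketch of the same query with the google-cloud-recommender client, where the project and zone are placeholders:

```python
from google.cloud import recommender_v1  # pip install google-cloud-recommender

client = recommender_v1.RecommenderClient()

# Idle VM recommendations are produced per project and zone by this recommender.
parent = (
    "projects/my-project/locations/us-central1-a/"
    "recommenders/google.compute.instance.IdleResourceRecommender"
)

for recommendation in client.list_recommendations(parent=parent):
    print(recommendation.name, recommendation.description)
```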
