
CCA-500 Cloudera Certified Administrator for Apache Hadoop (CCAH) Questions and Answers

Question 4

Each node in your Hadoop cluster, running YARN, has 64 GB of memory and 24 cores. Your yarn-site.xml has the following configuration:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>

You want YARN to launch no more than 16 containers per node. What should you do?

Options:

A.

Modify yarn-site.xml with the following property:

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>

B.

Modify yarn-site.xml with the following property:

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>4096</value>
</property>

C.

Modify yarn-site.xml with the following property:

yarn.nodemanager.resource.cpu-vcores

D.

No action is needed: YARN’s dynamic resource allocation automatically optimizes the node memory and cores
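
For context on the mechanism this question exercises (a sketch only, assuming default scheduler behavior): YARN grants container memory in multiples of yarn.scheduler.minimum-allocation-mb, so the memory a NodeManager exposes divided by that minimum bounds how many containers the node can run at once. For example:

<!-- Illustration only: with the 32768 MB NodeManager allocation above,
     a 2048 MB scheduler minimum bounds a node at 32768 / 2048 = 16 containers -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>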

Question 5

Which three basic configuration parameters must you set to migrate your cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2)? (Choose three)

Options:

A.

Configure the NodeManager to enable MapReduce services on YARN by setting the following property in yarn-site.xml:

<property>
  <name>yarn.nodemanager.hostname</name>
  <value>your_nodeManager_shuffle</value>
</property>

B.

Configure the NodeManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:

<property>
  <name>yarn.nodemanager.hostname</name>
  <value>your_nodeManager_hostname</value>
</property>

C.

Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml:

<property>
  <name>mapreduce.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
</property>

D.

Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml:

<property>
  <name>mapreduce.job.maps</name>
  <value>2</value>
</property>

E.

Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>your_resourceManager_hostname</value>
</property>

F.

Configure MapReduce as a Framework running on YARN by setting the following property in mapred-site.xml:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
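
As background for the migration this question describes, a minimal MRv2-on-YARN setup is typically spread across mapred-site.xml and yarn-site.xml roughly as sketched below (assuming a plain Apache Hadoop 2.2+ deployment; the hostname value is a placeholder):

<!-- mapred-site.xml: submit MapReduce jobs to the YARN framework -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: point workers at the ResourceManager and enable the MapReduce shuffle service -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>your_resourceManager_hostname</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>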

Question 6

You are running a Hadoop cluster with a NameNode on host mynamenode. What are two ways to determine available HDFS space in your cluster?

Options:

A.

Run hdfs fs -du / and locate the DFS Remaining value

B.

Run hdfs dfsadmin -report and locate the DFS Remaining value

C.

Run hdfs dfs / and subtract NDFS Used from configured Capacity

D.

Connect to https://mynamenode:50070/dfshealth.jsp and locate the DFS remaining value
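
For reference, the dfsadmin report subcommand prints cluster-wide capacity figures, including a DFS Remaining line; a quick sketch from a shell (assuming the hdfs client is on your PATH):

# Print the cluster capacity summary and pull out the remaining-space lines
hdfs dfsadmin -report | grep -i "DFS Remaining"

# The NameNode web UI (port 50070 in Hadoop 2) reports the same figures on its dfshealth page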

Question 7

Your cluster's mapred-site.xml includes the following parameters:

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>

And your cluster's yarn-site.xml includes the following parameter:

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>

What is the maximum amount of virtual memory allocated for each map task before YARN will kill its Container?

Options:

A.

4 GB

B.

17.2 GB

C.

8.9 GB

D.

8.2 GB

E.

24.6 GB
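
As a reference for the arithmetic this question relies on (assuming the standard NodeManager virtual-memory check is enabled): a container's virtual-memory ceiling is its physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio, so for a map task configured as above:

virtual memory limit = mapreduce.map.memory.mb × yarn.nodemanager.vmem-pmem-ratio = 4096 MB × 2.1 = 8601.6 MB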

Question 8

You have just run a MapReduce job to filter user messages to only those of a selected geographical region. The output for this job is in a directory named westUsers, located just below your home directory in HDFS. Which command gathers these into a single file on your local file system?

Options:

A.

hadoop fs -getmerge -R westUsers.txt

B.

hadoop fs -getmerge westUsers westUsers.txt

C.

hadoop fs -cp westUsers/* westUsers.txt

D.

hadoop fs -get westUsers westUsers.txt
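
For reference, getmerge concatenates every file under an HDFS source directory into one file on the local filesystem; a usage sketch following the question's westUsers layout:

# Merge the part files under ~/westUsers in HDFS into a single local file
hadoop fs -getmerge westUsers westUsers.txt

# Spot-check the merged output locally
head westUsers.txt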

Question 9

You're upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of 128 MB for all new files written to the cluster after the upgrade. What should you do?

Options:

A.

You cannot enforce this, since client code can always override this value

B.

Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final

C.

Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode

D.

Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final

E.

Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode
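
For reference, 128 MB expressed in bytes is 134217728 (128 × 1024 × 1024), and a Hadoop configuration property can be locked against overrides by later-loaded resources with a final element; a sketch of such an hdfs-site.xml entry (dfs.blocksize is the Hadoop 2 name; older releases read dfs.block.size):

<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
  <!-- Marking the property final prevents client job configurations from overriding it -->
  <final>true</final>
</property>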

Exam Code: CCA-500
Exam Name: Cloudera Certified Administrator for Apache Hadoop (CCAH)
Last Update: Dec 4, 2024
Questions: 60
