Extractor

Time Limit : 1 Second

Memory Limit : 128 MB

Description
Large-scale data is very hard to process on a single server. As data grows larger and larger, Google introduced a large-scale data processing model named Map/Reduce.

 

A Map/Reduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. (Copied from the Hadoop official site, MapReduce Tutorial)

 

In common use, a map task simply reads a structured data list whose fields are separated by the Tab character (‘\t’, ASCII 9) and whose records are separated by the Line Feed character (‘\n’, ASCII 10), chooses some columns, rearranges them as required, and outputs the result. All processing is line-based. We call this kind of application an extractor.

 

Now we just want you to write a simple extractor.
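To make the field-splitting idea concrete, here is a small illustrative sketch (not part of the original statement; the record and column indices below are made up) showing how one line can be split on ‘\t’ and the chosen columns re-emitted in the requested order:

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical record and column choice, for illustration only.
    string line = "Tenc\t100\tAC";          // one tab-separated record
    vector<int> chosen = {0, 2};            // column indices to extract, in output order

    vector<string> fields;                  // split the record on '\t'
    stringstream ss(line);
    string part;
    while (getline(ss, part, '\t')) fields.push_back(part);

    for (size_t i = 0; i < chosen.size(); ++i) {
        if (i) cout << '\t';                // tab-separate the output fields
        cout << fields[chosen[i]];
    }
    cout << '\n';                           // prints: Tenc<TAB>AC
    return 0;
}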

Input
The first line contains the number of test cases (no more than 50).

For each case, the first line contains three numbers: n (1<=n<=200), m, and p (1<=m,p<=50), indicating the number of lines, the number of columns per line, and the number of chosen columns.

The next line contains p numbers, indicating the chosen column indices (counted from 0), in the order in which we want them output.

Each of the next n lines contains m parts separated by the Tab character (‘\t’); each part contains only letters and digits and is no longer than 25 characters.
Output
For each case, the first line of output is “Case X:”, where X is the case number (starting from 1).

Then output the extracted data set: n lines, each containing p parts separated by the Tab character (‘\t’).
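As a rough sketch of one workable way to consume the format above (an assumption about the approach, not a required solution), the job reduces to reading each data line with getline, splitting it on ‘\t’, and printing the chosen fields:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int T;
    cin >> T;                                   // number of test cases
    for (int tc = 1; tc <= T; ++tc) {
        int n, m, p;
        cin >> n >> m >> p;                     // lines, columns per line, chosen columns
        vector<int> idx(p);
        for (int &x : idx) cin >> x;            // chosen column indices, 0-based
        cin.ignore(numeric_limits<streamsize>::max(), '\n');  // move past the index line

        cout << "Case " << tc << ":\n";
        for (int i = 0; i < n; ++i) {
            string line, part;
            getline(cin, line);
            vector<string> fields;              // split the record on '\t'
            stringstream ss(line);
            while (getline(ss, part, '\t')) fields.push_back(part);
            for (int j = 0; j < p; ++j) {       // re-emit chosen fields, tab-separated
                if (j) cout << '\t';
                cout << fields[idx[j]];
            }
            cout << '\n';
        }
    }
    return 0;
}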

sample input
2
2 4 2
2 0
A	B	C	D
YXB	XY	LKQ	XH
4 3 2
0 2
Tenc	100	AC
Bidu	200	BC
RR	300	CD
Ali	400	XX
sample output
Case 1:
C	A
LKQ	YXB
Case 2:
Tenc	AC
Bidu	BC
RR	CD
Ali	XX
hint
source
Hong Zehua