Sqoop Import into Hive: Configuration
Database fields often contain newlines and similar characters, which cause serious problems when the data is imported into Hive. Sqoop therefore provides configuration options to deal with this.
sqoop import --connect jdbc:oracle:thin:@url --username user --password pwd \
  --table PA18ODSDATA.PARTNER_INFO \
  --columns ID_PARTNER_INFO,PARTNER_ID,PARTNER_NAME,PROJECT_ID,PROJECT_NAME \
  -m 1 --fields-terminated-by '\001' --lines-terminated-by '\n' \
  --hive-drop-import-delims --hive-import --hive-overwrite \
  --hive-table eshop.partner_info
When using --query instead of --table, you must also supply --target-dir (the HDFS destination directory); the data is imported in text format.
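A minimal sketch of such a free-form query import against the same table; the column subset, the --target-dir path, and the WHERE clause are assumptions for illustration. Note that Sqoop requires the literal $CONDITIONS token in the WHERE clause of a --query import:

sqoop import --connect jdbc:oracle:thin:@url --username user --password pwd \
  --query 'SELECT ID_PARTNER_INFO, PARTNER_ID, PARTNER_NAME FROM PA18ODSDATA.PARTNER_INFO WHERE $CONDITIONS' \
  --target-dir /tmp/partner_info \
  -m 1 --fields-terminated-by '\001' --lines-terminated-by '\n' \
  --hive-drop-import-delims --hive-import --hive-overwrite \
  --hive-table eshop.partner_info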
For a direct table import, use the first command shown above.
If you remove the --hive-overwrite flag, Sqoop appends rows to the Hive table instead of overwriting its data, as sketched below.
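The append variant of the same import, as a sketch (identical to the command above except that --hive-overwrite is dropped, so each run adds new rows to eshop.partner_info):

sqoop import --connect jdbc:oracle:thin:@url --username user --password pwd \
  --table PA18ODSDATA.PARTNER_INFO \
  -m 1 --fields-terminated-by '\001' --lines-terminated-by '\n' \
  --hive-drop-import-delims --hive-import \
  --hive-table eshop.partner_info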
Note that the table receiving appended data must be a text or sequence table; RCFile tables do not support appending new data.
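One way to verify the storage format of an existing target table before appending (a sketch using the hive CLI with the table name from the example above; look for TextInputFormat or SequenceFileInputFormat in the output):

hive -e "DESCRIBE FORMATTED eshop.partner_info;"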
The Hive-related arguments are explained as follows.
For details, see:
http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html
Table 14. Hive arguments:

Argument                         Description
--hive-home <dir>                Override $HIVE_HOME.
--hive-import                    Import tables into Hive (uses Hive's default delimiters if none are set).
--hive-overwrite                 Overwrite existing data in the Hive table.
--create-hive-table              If set, the job fails if the target Hive table already exists. By default this property is false.
--hive-table <table-name>        Sets the table name to use when importing into Hive.
--hive-drop-import-delims        Drops \n, \r, and \01 from string fields when importing to Hive.
--hive-delims-replacement        Replaces \n, \r, and \01 in string fields with a user-defined string when importing to Hive.
--hive-partition-key             Name of the Hive field on which the imported data is partitioned.
--hive-partition-value <v>       String value that serves as the partition-key value for the data imported into Hive in this job.
--map-column-hive <map>          Override the default mapping from SQL type to Hive type for configured columns.
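A sketch that combines the partitioning and type-mapping arguments with the example table; the dt partition key, its value, and the PARTNER_NAME=STRING override are assumptions for illustration only:

sqoop import --connect jdbc:oracle:thin:@url --username user --password pwd \
  --table PA18ODSDATA.PARTNER_INFO \
  -m 1 --fields-terminated-by '\001' --lines-terminated-by '\n' \
  --hive-drop-import-delims --hive-import \
  --hive-table eshop.partner_info \
  --hive-partition-key dt --hive-partition-value '2012-11-01' \
  --map-column-hive PARTNER_NAME=STRING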