
(Repost) Hive Basic Commands (Part 2)

 monday小屋 2016-10-19

Hive operations

 

1. Store the Hive metadata (metastore) in MySQL

 

1) Edit the configuration file (hive-site.xml)

<!--add by gaojingsong -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.1.100:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.mapping.Schema</name>
  <value>HIVE</value>
</property>
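Before running the initialization, the MySQL server referenced in the URL needs to accept the configured user, and the MySQL JDBC driver jar (mysql-connector-java) must be on Hive's classpath, normally by copying it into $HIVE_HOME/lib. A minimal sketch of the MySQL-side setup, using the hive database name and the root/root credentials from the config above (pre-creating the database is optional because of createDatabaseIfNotExist=true; MySQL 5.x GRANT syntax is assumed):

-- run in the mysql client on 192.168.1.100
CREATE DATABASE IF NOT EXISTS hive;
-- MySQL 5.x style grant; tighten the host and password for real deployments
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;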

2) Run the schema initialization command

[root@hadoop0 bin]# schematool -dbType mysql -initSchema  -userName root -passWord root -verbose

beeline>
Initialization script completed
schemaTool completed
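
To confirm that schematool really created the metastore schema, you can inspect the hive database on the MySQL side. A quick sketch (DBS, TBLS and VERSION are table names from the standard Hive metastore schema):

-- run in the mysql client
USE hive;
SHOW TABLES;              -- should list metastore tables such as DBS, TBLS, COLUMNS_V2
SELECT * FROM VERSION;    -- the schema version recorded by schematool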

 

2. Start Hive
[root@hadoop0 bin]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hive2.0/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop272/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www./codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/bigdata/hive2.0/conf/hive-log4j2.properties
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call From hadoop0/192.168.1.111 to hadoop0:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki./hadoop/ConnectionRefused

 

Solution: start Hadoop first

[root@hadoop0 bin]# cd ../../hadoop272/sbin/
[root@hadoop0 sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
99/06/15 09:10:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop0]
hadoop0: starting namenode, logging to /opt/bigdata/hadoop272/logs/hadoop-root-namenode-hadoop0.out
localhost: starting datanode, logging to /opt/bigdata/hadoop272/logs/hadoop-root-datanode-hadoop0.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/bigdata/hadoop272/logs/hadoop-root-secondarynamenode-hadoop0.out
99/06/15 09:10:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /opt/bigdata/hadoop272/logs/yarn-root-resourcemanager-hadoop0.out
localhost: starting nodemanager, logging to /opt/bigdata/hadoop272/logs/yarn-root-nodemanager-hadoop0.out

Verify the daemons with jps
[root@hadoop0 sbin]# jps
4946 NodeManager
4689 SecondaryNameNode
4847 ResourceManager
4510 DataNode
4411 NameNode
5159 Jps
[root@hadoop0 sbin]# cd ../../hive2.0/bin/
[root@hadoop0 bin]# ./hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hive2.0/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop272/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www./codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/bigdata/hive2.0/conf/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive>
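
The banner above warns that Hive-on-MR is deprecated in Hive 2. If Tez (or Spark) is installed and configured for the cluster, the execution engine can be switched per session; a hedged sketch, assuming a working Tez setup:

-- requires Tez to be installed and configured; otherwise keep the default
set hive.execution.engine=tez;
-- switch back to the classic MapReduce engine
set hive.execution.engine=mr;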

 

3. Two ways to execute CREATE TABLE statements

1) From outside the Hive shell: hive -f xxx.sql

2) From inside the Hive shell: source xxx.sql

[root@hadoop0 ~]# cat  /tmp/s.sql
CREATE TABLE student (id int, name string);
[root@hadoop0 ~]# cat  /tmp/teacher.sql
CREATE TABLE teacher (id int, name string);


hive> show tables;
OK
Time taken: 0.21 seconds
hive> source /tmp/s.sql ;
OK
Time taken: 4.527 seconds
hive> show tables;
OK
student
Time taken: 0.162 seconds, Fetched: 1 row(s)
hive>
[root@hadoop0 ~]# hive -f /tmp/teacher.sql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hive2.0/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop272/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www./codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/bigdata/hive2.0/conf/hive-log4j2.properties
OK
Time taken: 7.664 seconds
[root@hadoop0 ~]#
hive> show tables;
OK
student
teacher
Time taken: 0.896 seconds, Fetched: 2 row(s)

 

Show Hive's current configuration settings

hive> set;

system:sun.java.command=org.apache.hadoop.util.RunJar /opt/bigdata/hive2.0/lib/hive-cli-2.0.1.jar org.apache.hadoop.hive.cli.CliDriver
system:sun.java.launcher=SUN_STANDARD
system:sun.jnu.encoding=UTF-8
system:sun.management.compiler=HotSpot Client Compiler
system:sun.os.patch.level=unknown
system:user.country=US
system:user.dir=/opt/bigdata/hive2.0/bin
system:user.home=/root
system:user.language=en
system:user.name=root
system:user.timezone=PRC

// Print column headers in query output
hive> set hive.cli.print.header=true;
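
A related CLI setting that is often combined with the one above (both are standard Hive properties; set them per session as shown, or put them in ~/.hiverc):

set hive.cli.print.header=true;      -- print column names above query results
set hive.cli.print.current.db=true;  -- show the current database in the hive> prompt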

 

Create a partitioned table
hive> CREATE TABLE tt(
SecurityID STRING,
tradeTime STRING,
PreClosePx DOUBLE
) PARTITIONED BY (tradeDate INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
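
With tt partitioned by tradeDate, data is normally loaded one partition at a time, and queries that filter on the partition column only read that partition. A minimal sketch; the CSV path below is a placeholder, not a file from this walkthrough:

-- load a local comma-separated file into a single partition
LOAD DATA LOCAL INPATH '/tmp/quotes_20160720.csv'
INTO TABLE tt PARTITION (tradeDate=20160720);

-- the tradeDate filter lets Hive prune all other partitions
SELECT SecurityID, PreClosePx FROM tt WHERE tradeDate=20160720 LIMIT 10;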

Show the table structure
hive> DESCRIBE student;
OK
id                      int                                        
name                    string                                     
Time taken: 1.036 seconds, Fetched: 2 row(s)
Show the table's partitions (student is not partitioned, so this fails):
hive> show partitions student;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Table student is not a partitioned table

Show the partitions of tt (no partitions have been added yet, so nothing is listed):
hive> show partitions tt;
OK
Time taken: 0.449 seconds
Show the generated CREATE TABLE statement
hive> show create table student;
OK
CREATE TABLE `student`(
  `id` int,
  `name` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://hadoop0:9000/user/hive/warehouse/student'
TBLPROPERTIES (
  'transient_lastDdlTime'='929409669')
Time taken: 0.697 seconds, Fetched: 13 row(s)

Show the CREATE TABLE statement for the partitioned table
hive> show create table tt;
OK
CREATE TABLE `tt`(
  `securityid` string,
  `tradetime` string,
  `preclosepx` double)
PARTITIONED BY (
  `tradedate` int)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim'=',',
  'serialization.format'=',')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://hadoop0:9000/user/hive/warehouse/tt'
TBLPROPERTIES (
  'transient_lastDdlTime'='929410058')
Time taken: 0.094 seconds, Fetched: 19 row(s)
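
Besides SHOW CREATE TABLE, DESCRIBE FORMATTED prints the same metadata (columns, partition keys, location, SerDe) in a tabular layout:

DESCRIBE FORMATTED tt;
-- once a partition exists (one is added just below):
DESCRIBE FORMATTED tt PARTITION (tradedate=20160720);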

Add a partition, mapping it to an existing HDFS directory:
hive> alter table tt add partition(tradedate=20160720) location '/tmp';
OK
Time taken: 0.256 seconds

hive> show partitions tt;
OK
tradedate=20160720
Time taken: 0.337 seconds, Fetched: 1 row(s)
hive> alter table tt add partition(tradedate=20160721) location '/tmp';
OK
Time taken: 0.256 seconds
hive> alter table tt add partition(tradedate=20160722) location '/tmp';
OK
Time taken: 0.259 seconds
hive> alter table tt add partition(tradedate=90160722) location '/tmp';
OK
Time taken: 0.235 seconds
hive> show partitions tt;
OK
tradedate=20160720
tradedate=20160721
tradedate=20160722
tradedate=90160722
Drop the mistyped partition:
hive> alter table tt drop partition(tradedate=90160722);
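
To make the cleanup repeatable, the drop can also be written with IF EXISTS and verified afterwards; a short sketch:

ALTER TABLE tt DROP IF EXISTS PARTITION (tradedate=90160722);
SHOW PARTITIONS tt;   -- tradedate=90160722 should no longer be listed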
