Sunday, May 11, 2014

Cloudera Manager Embedded PostgreSQL Hive Metastore Server OutOfMemoryError issue - Stack Overflow


I'm using:


Cloudera Manager Free Edition: 4.5.1
Cloudera Hadoop Distro: CDH 4.2.0-1.cdh4.2.0.p0.10 (Parcel)
Hive Metastore backed by the Cloudera Manager embedded PostgreSQL database.

My Cloudera Manager is running on a separate machine that is not part of the cluster.


After setting up the cluster with Cloudera Manager, I started using Hive through Hue + Beeswax.


Everything ran fine for a while, and then, all of a sudden, any query against a particular table with a large number of partitions (about 14,000) started to time out:


FAILED: SemanticException org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out

When I noticed this, I checked the logs and found that the connection to the Hive Metastore was timing out:


WARN metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out

Having seen this, I suspected a problem with the Hive Metastore itself, so I checked its logs and discovered java.lang.OutOfMemoryErrors:


/var/log/hive/hadoop-cmf-hive1-HIVEMETASTORE-hci-cdh01.hcinsight.net.log.out:

2013-05-07 14:13:08,744 ERROR org.apache.thrift.ProcessFunction: Internal error processing get_partitions_with_auth
java.lang.OutOfMemoryError: Java heap space
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.datanucleus.util.ClassUtils.newInstance(ClassUtils.java:95)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newLiteralParameter(SQLExpressionFactory.java:248)
at org.datanucleus.store.rdbms.scostore.RDBMSMapEntrySetStore.getSQLStatementForIterator(RDBMSMapEntrySetStore.java:323)
at org.datanucleus.store.rdbms.scostore.RDBMSMapEntrySetStore.iterator(RDBMSMapEntrySetStore.java:221)
at org.datanucleus.sco.SCOUtils.populateMapDelegateWithStoreData(SCOUtils.java:987)
at org.datanucleus.sco.backed.Map.loadFromStore(Map.java:258)
at org.datanucleus.sco.backed.Map.keySet(Map.java:509)
at org.datanucleus.store.fieldmanager.LoadFieldManager.internalFetchObjectField(LoadFieldManager.java:118)
at org.datanucleus.store.fieldmanager.AbstractFetchFieldManager.fetchObjectField(AbstractFetchFieldManager.java:114)
at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceField(MStorageDescriptor.java)
at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceFields(MStorageDescriptor.java)
at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1647)
at org.datanucleus.store.fieldmanager.LoadFieldManager.processPersistable(LoadFieldManager.java:63)
at org.datanucleus.store.fieldmanager.LoadFieldManager.internalFetchObjectField(LoadFieldManager.java:84)
at org.datanucleus.store.fieldmanager.AbstractFetchFieldManager.fetchObjectField(AbstractFetchFieldManager.java:104)
at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
at org.apache.hadoop.hive.metastore.model.MPartition.jdoReplaceField(MPartition.java)
at org.apache.hadoop.hive.metastore.model.MPartition.jdoReplaceFields(MPartition.java)
at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1647)
at org.datanucleus.ObjectManagerImpl.performDetachAllOnTxnEndPreparation(ObjectManagerImpl.java:3552)
at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:3291)
at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:369)
at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256)

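The trace above suggests DataNucleus is materializing every MPartition (and its MStorageDescriptor and parameter maps) in a single transaction. As a purely illustrative back-of-envelope check of whether that could blow a small heap, here is a sketch; the per-partition cost is an assumed figure, not something I have measured:

```python
# Back-of-envelope heap estimate for detaching all partitions at once.
# BYTES_PER_PARTITION is an ASSUMED figure for a materialized MPartition
# object graph (storage descriptor, parameter maps, etc.), not a measurement.
PARTITIONS = 14_000
BYTES_PER_PARTITION = 256 * 1024  # assumed ~256 KiB per partition

def estimated_heap_gib(partitions, bytes_per_partition):
    """Heap (GiB) needed if every partition is materialized simultaneously."""
    return partitions * bytes_per_partition / 2**30

print(f"~{estimated_heap_gib(PARTITIONS, BYTES_PER_PARTITION):.1f} GiB")  # ~3.4 GiB
```

Under that assumption, even a few GiB of heap could be exhausted by one `get_partitions_with_auth` call against a 14,000-partition table.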
At this point, the hive metastore gets shutdown and restarted:


2013-05-07 14:39:40,576 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: Shutting down hive metastore.
2013-05-07 14:41:09,979 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: Starting hive metastore on port 9083

Now, to fix this, I've increased the max heap size of both the Hive Metastore server and the Beeswax server:


1. Hive / Hive Metastore Server (Base) / Resource Management / Java Heap Size of Metastore Server: 2 GiB (the first thing I tried).
2. Hue / Beeswax Server (Base) / Resource Management / Java Heap Size of Beeswax Server: 2 GiB (after reading some group posts and other material online, I tried this as well).

Neither of these two steps seems to have helped, as I continue to see OOMEs in the Hive Metastore log.


Then I noticed that the actual metastore 'database' runs as part of my Cloudera Manager installation, and I'm wondering whether that PostgreSQL process is running out of memory. I looked for ways to increase the Java heap size for that process and found very little documentation on it.
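(One thing I'm considering: if the embedded database itself is memory-starved, its memory settings would presumably live in its postgresql.conf rather than in any Java option. A hypothetical fragment; the data directory location and the values below are assumptions on my part, not documented recommendations:)

```ini
# Hypothetical edits to the embedded database's postgresql.conf
# (e.g. under the Cloudera Manager embedded DB data directory):
shared_buffers = 256MB   # assumed value; governs PostgreSQL's shared cache
work_mem = 16MB          # assumed value; per-sort/per-hash working memory
```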


I was hoping someone here could help me solve this issue.


Should I increase the Java heap size for the embedded database? If so, where would I do this?


Is there something else that I'm missing?


Thanks!



