linux - Hadoop daemons can't be stopped with the proper command


A running Hadoop system runs daemon jobs such as the namenode, journalnode, etc. Take the namenode as an example.

To start the namenode, you can use the command: hadoop-daemon.sh start namenode

To stop the namenode, you can use the command: hadoop-daemon.sh stop namenode

But here comes the question: if the namenode was started yesterday or a couple of hours ago, the stop command works fine. If the namenode has been running for a month, the stop command only shows:

no namenode to stop

Yet I can still see the namenode daemon running with the jps command, and I have to use the kill command to kill the process.
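For illustration, such a session might look like this (the PID shown is hypothetical):

    $ hadoop-daemon.sh stop namenode
    no namenode to stop
    $ jps
    12345 NameNode
    $ kill 12345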

Why does this happen? Is there a way to make sure the stop command works?

Thanks.

The reason hadoop-daemon.sh stops working after a while is that hadoop-env.sh contains the parameters HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR (set with export), which point to the directory where the PID files of the daemons are stored. The default location of that directory is /tmp. The problem is that the /tmp folder is cleaned automatically after some time (for example on Red Hat Linux). Once the PID file has been deleted, the stop command cannot find the process ID that was stored in the file. The same reasoning applies to the yarn-daemon.sh command.
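A simplified sketch of the stop logic (not the real script, which handles more cases, but it shows the dependency on the PID file):

    # sketch: how hadoop-daemon.sh roughly decides whether it can stop a daemon
    pid_file=$HADOOP_PID_DIR/hadoop-$USER-namenode.pid   # PID recorded at start time
    if [ -f "$pid_file" ]; then
        kill "$(cat "$pid_file")"        # stop the recorded process
    else
        echo "no namenode to stop"       # PID file is gone, e.g. /tmp was cleaned
    fi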

Modifying these variables to point to a different directory instead of the default /tmp folder should solve the problem (an example follows below):

hadoop-env.sh: HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR
yarn-env.sh: YARN_PID_DIR
mapred-env.sh: HADOOP_MAPRED_PID_DIR

After the modification, restart the processes related to it. As a security precaution, the folder containing the PID files should not be accessible to non-admin users.
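For example (the directory /var/run/hadoop and the user hdfs below are assumptions; use whatever persistent, admin-only location fits your setup):

    # create a persistent PID directory readable only by the admin/daemon user
    mkdir -p /var/run/hadoop
    chown hdfs:hadoop /var/run/hadoop
    chmod 750 /var/run/hadoop

    # hadoop-env.sh
    export HADOOP_PID_DIR=/var/run/hadoop
    export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop

    # yarn-env.sh
    export YARN_PID_DIR=/var/run/hadoop

    # mapred-env.sh
    export HADOOP_MAPRED_PID_DIR=/var/run/hadoop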

