Discussion:
docker-proxy <defunct> issue
Thorsten Sideboard
2015-06-19 15:49:21 UTC
hey there,

Wondering if anyone can possibly offer advice on a Docker issue I'm having.
The reason I ask here is that I wonder whether it's related to me not
having tuned my CoreOS instances properly.

(I've just posted this on the docker-dev mailing list too, and had a
previous conversation on the kubernetes/google-container list)

I have a CoreOS cluster of 50 machines, stable 681.0.0, some now
upgraded to 681.2.0. I'm running Kubernetes atop the cluster to
distribute a new service.

The new service exposes two host ports, 9090 and 1443. Port 9090 is
the health-check endpoint, and it's being polled every n seconds by a
fleet of 500 servers (not CoreOS) to check the service's availability.
I have fifty machines in the new service, and approximately 20 of them
fail per day, each requiring a Docker restart to recover.

The actual service is still running fine, and I can exec inside the
container and still access port 9090; however, it's no longer being
exposed on the host.
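
To illustrate what I mean, the checks look roughly like this (the
container ID and health-check path are placeholders, not my real ones):

docker exec <container-id> curl -s http://localhost:9090/   # still answers from inside the container
curl -s http://127.0.0.1:9090/                               # fails on the host once the proxy is gone
docker port <container-id>                                   # shows what Docker thinks is still mapped
ps -ef | grep docker-proxy                                    # the proxy for the affected port is missing or defunct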

A healthy instance shows both ports being exposed via docker-proxy e.g.

docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9090 -container-ip 10.0.43.4 -container-port 9090
docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1443 -container-ip 10.0.43.4 -container-port 1443

and the issue is that the docker-proxy process simply dies, showing
[docker] <defunct>
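
(For what it's worth, <defunct> just means the docker-proxy child has
exited but its parent, the Docker daemon, hasn't reaped it yet.
Something along these lines should confirm who the parent is; the PID
is obviously a placeholder:)

ps ax -o pid,ppid,stat,cmd | grep -i defunct   # the zombie shows STAT "Z"
ps -o cmd= -p <ppid-from-above>                # should point at the Docker daemon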

The hosts show a lot of open network ports due to the health-check traffic:

TCP: 45728 (estab 16, closed 45685, orphaned 0, synrecv 0, timewait 45233/0), ports 0
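
For reference, these are the sort of limits I mean (standard Linux
sysctls, nothing CoreOS-specific; the daemon PID is a placeholder):

sysctl net.ipv4.tcp_max_tw_buckets net.ipv4.ip_local_port_range
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max   # only if conntrack is loaded
cat /proc/sys/fs/file-nr                       # allocated vs. maximum file handles system-wide
cat /proc/<docker-daemon-pid>/limits           # per-process open-file limit for the daemon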

Not a huge number, but enough to make me wonder if I'm hitting a resource limit.

There is nothing obvious in journald related to Docker, and nothing
that points to resource starvation, but I'm new enough to
CoreOS/journald that I wonder whether I'm looking in all the right
places. Does anyone have advice on tracking down the issue?
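
For reference, the kind of journald queries I mean, in case I'm
missing an obvious place to look:

journalctl -u docker.service --since "1 hour ago" --no-pager
journalctl -k --no-pager | grep -iE 'oom|out of memory|conntrack'
journalctl -p err -b --no-pager                # anything at error priority since boot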

thanks!
thorsten
Zhenyan Zhu
2018-06-29 22:05:19 UTC
Hi Thor, I just hit the same issue as the one you posted and happened
to find this thread via a Google search. Did you eventually resolve the
docker-proxy defunct issue? Thanks for sharing! - Alex