tcp-routing in Lattice
Jack Cai
I'm playing around with the tcp-routing feature in the latest Lattice
release. I started two node.js applications in the pushed image (listening on two ports), one mapped to an http route and the other to a tcp route. I can connect to the http route successfully in the browser, but when I try to connect to the tcp port in the browser, I got connection refused. It looks like the mapped public tcp port on 192.168.11.11 is not open at all. Any advice on how to diagnose this? Thanks in advance! Jack |
Atul Kshirsagar
It's possible that HAProxy was not properly configured. Can you provide the output of `ltc status <app name>`? This will tell you whether the tcp route has been configured for the app.
Some things you can try:
1) Do `ltc update --tcp-route externalport:containerport` and see if that fixes the problem (this will reconfigure HAProxy).
2) If that doesn't work either, try `vagrant reload` to make sure all the processes in the lattice brain are restarted, to rule out the possibility that HAProxy is in a bad state.
Marco Nicosia
Hi Jack,
In addition to Atul's suggestions, could you please give us the exact command lines you used to launch the two apps? The CLI arguments are tricky; we may be able to spot something about the way you've tried to configure the routes by looking at how you launched the apps.

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io
c: 650-796-2948
Jack Cai
Thanks Atul and Marco for your advice.
Below is the command I used to push the docker image:

```
ltc create hello <docker-image> --ports 8888,8788 --http-routes hello:8888 --tcp-routes 8788:8788 --memory-mb=0 --timeout=10m --monitor-port=8888
```

After the push completed, it reported:

```
...
hello is now running.
App is reachable at:
192.168.11.11.xip.io:8788
http://hello.192.168.11.11.xip.io
```

I also tried to update the routes:

```
ltc update hello --http-routes hello:8888 --tcp-routes 8788:8788
```

If I do `ltc status hello`, I see the below routes:

```
Instances      1/1
Start Timeout  0
DiskMB         0
MemoryMB       0
CPUWeight      100
Ports          8788,8888
Routes         192.168.11.11.xip.io:8788 => 8788
               hello.192.168.11.11.xip.io => 8888
```

But when I visited http://192.168.11.11.xip.io:8788/, I got "Unable to connect", while I could visit http://hello.192.168.11.11.xip.io/ successfully.

Below is the log I saw when doing `vagrant up` to bring up Lattice:

```
...
==> default: stdin: is not a tty
==> default: mkdir: created directory '/var/lattice'
==> default: mkdir: created directory '/var/lattice/setup'
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default:  * Stopping web server lighttpd
==> default:    ...done.
==> default: Installing cflinuxfs2 rootfs...
==> default: done
==> default:  * Starting web server lighttpd
==> default:    ...done.
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Brain
==> default: Finished Installing Lattice Brain (v0.4.0) (Diego 0.1398.0)!
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Lattice Cell
==> default: Finished Installing Lattice Cell (v0.4.0) (Diego 0.1398.0)!
==> default: bootstrap start/running
==> default: Lattice is now installed and running.
==> default: You may target it using: ltc target 192.168.11.11.xip.io
```

There is an error "stdin: is not a tty", and I don't see haproxy mentioned in the log. Maybe haproxy is not started at all?

Jack
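[Editor's note: a browser only speaks HTTP, so "Unable to connect" there can hide what is actually wrong; a raw TCP client gives a cleaner signal on whether the mapped port is open at all. A minimal sketch, reusing the host and port from the routes above:]

```typescript
import * as net from "net";

// Probe the tcp route directly, bypassing the browser.
const socket = net.connect(8788, "192.168.11.11.xip.io", () => {
  console.log("connected - the mapped public port is open");
  socket.end();
});
socket.on("error", (err) => {
  // ECONNREFUSED here means nothing is listening on the mapped port.
  console.error("connect failed:", err.message);
});
```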
Jack Cai
After sshing into the vagrant VM and digging into the processes/ports, I found out that in my previous attempt I was trying to map one additional port that was already occupied by garden (7777). Because of this conflict, haproxy gave up mapping all the ports. Once I changed 7777 to 17777, the issue went away. So the lesson learned is to examine the ports that are already in use in the vagrant VM, and avoid using them.

Jack
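[Editor's note: for anyone hitting the same thing, besides inspecting the VM directly (e.g. with netstat over `vagrant ssh`), you can probe candidate external ports before mapping them. A minimal sketch to be run from inside the vagrant VM; the port list is illustrative:]

```typescript
import * as net from "net";

// Try to bind each candidate port; a bind error (e.g. EADDRINUSE)
// means something, like garden on 7777, already owns it.
function probe(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const server = net.createServer();
    server.once("error", () => resolve(false)); // in use (or not permitted)
    server.once("listening", () => server.close(() => resolve(true)));
    server.listen(port);
  });
}

async function main() {
  for (const port of [7777, 8788, 17777]) {
    console.log(port, (await probe(port)) ? "free" : "in use");
  }
}
main();
```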
Atul Kshirsagar
Great! Give us your feedback after you have played around with tcp routing.
Jack Cai
One thing I'm wondering is how to provide enough public "ports" for users to map to. It seems the cloud provider needs to provide multiple public IPs to map the ports, otherwise they will soon run out of ports on the same IP. Any thoughts here?

Jack
Atul Kshirsagar
That's true. This is one of the limitations of pure tcp (layer 4) routing: we will be hit by scalability limits in terms of public IPs. However, per public IP we can theoretically provide 64K ports, and so can potentially accommodate many apps that require tcp routing (provided these apps and their clients can work with non-standard ports).
One alternative is an SNI-based solution, where the client provides the server name in the initial TLS handshake, which helps the router decide how to route the connection request. However, this can only be used over TLS, which may not be a very big limitation :)
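[Editor's note: to make the SNI idea concrete, here is a minimal sketch of a router that picks a backend from the server name the client sent in the TLS ClientHello. The cert paths and backend map are hypothetical, and this sketch terminates TLS via Node's tls module for simplicity; a real layer-4 router would more likely peek at the SNI extension without terminating TLS.]

```typescript
import * as fs from "fs";
import * as net from "net";
import * as tls from "tls";

// Hypothetical map from SNI server name to backend address.
const backends: Record<string, { host: string; port: number }> = {
  "app-a.example.com": { host: "10.0.0.10", port: 8788 },
  "app-b.example.com": { host: "10.0.0.11", port: 8788 },
};

tls.createServer(
  {
    key: fs.readFileSync("router-key.pem"), // hypothetical cert paths
    cert: fs.readFileSync("router-cert.pem"),
  },
  (socket) => {
    // socket.servername is the name the client sent via SNI.
    const target = socket.servername ? backends[socket.servername] : undefined;
    if (!target) {
      socket.end(); // unknown or missing SNI: drop the connection
      return;
    }
    const upstream = net.connect(target.port, target.host);
    upstream.on("error", () => socket.destroy());
    socket.pipe(upstream).pipe(socket); // shuttle bytes both ways
  }
).listen(443, () => console.log("sni router listening on 443"));
```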