Hi, F-Stack team,
I have ported an application to F-Stack. It is:
1 a single process with one extra thread (the thread only collects data and calls no ff_ API in it);
2 a client app that opens TCP connections to an Nginx server.
If I run this app on one CPU core, there is no problem. The command is below:
./myapp --tcp=192.168.0.112:10000 --conf=../../config.ini --proc-type=primary --proc-id=0
and lcore_mask in config.ini is set to 1:
[dpdk]
# Hexadecimal bitmask of cores to run on.
lcore_mask=1
However, when I run the app as multiple processes on multiple cores, there is a problem.
I want to run the first myapp on core 0 and the second myapp on core 1.
The commands are:
./myapp --tcp=192.168.0.112:10000 --conf=../../config.ini --proc-type=primary --proc-id=0
./myapp --tcp=192.168.0.112:10000 --conf=../../config.ini --proc-type=secondary --proc-id=1
and config.ini is:
[dpdk]
# Hexadecimal bitmask of cores to run on.
lcore_mask=3
Both processes run, and top shows us (user CPU) at 100%.
But I found that some TCP connections timed out: about half of the connections failed with a timeout, while the other half succeeded. I then captured packets on the server and found that after the server sends SYN+ACK packets, myapp receives only about half of them and replies with ACK to establish those connections. The other SYN+ACK packets sent by the server are never seen by the client, so myapp keeps retransmitting the initial SYN and those connections are never established.
AFAIK, when a server app is ported to F-Stack, RSS hashes the five-tuple of packets arriving from clients, so all packets of the same TCP flow are always delivered to the same process. But when a client app is ported to F-Stack, the client sends packets out to the server to establish TCP connections, and RSS may not hash the server's replies to the same process. For example, with process 1 on core 1 and process 2 on core 2, both F-Stack clients:
- process 1 sends a SYN to the server;
- the server replies with a SYN+ACK;
- RSS hashes the SYN+ACK to process 2. But processes 1 and 2 are different processes with different stacks and share nothing, so process 2 does nothing with this packet;
- process 1 never receives the SYN+ACK, so it retransmits the SYN again and again, and the TCP connection is never established.
I don’t know if my understanding is right, but I guess this is the reason. So how can I guarantee that packets from the server are delivered to a specific process? I think RSS alone cannot do this. Is there another way? How does the ported Nginx proxy handle this?
I would appreciate it if you could offer any help.