Persistent push with comet long-polling on Jetty?

I am trying to create a Jetty servlet that allows clients (web browsers, Java clients, ...) to get broadcast notifications from the web server.

The notifications should be sent in a JSON format.

My first idea was to make the client send a long-polling request, and the server respond when a notification is available using Jetty's Continuation API, then repeat.

The problem with this approach is that I am missing all the notifications that happen between two requests.

The only solution I found for this is to buffer the events on the server and use a timestamp mechanism to retransmit missed notifications, which works, but seems pretty heavy for what it does...
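
A minimal sketch of the kind of buffering I mean, in case it clarifies the question; the class name NotificationBuffer and the use of a sequence number as the cursor (rather than a raw timestamp) are just placeholders of mine:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    // Hypothetical buffer: every notification gets a monotonically increasing
    // sequence number; a poll carries the last sequence number the client has
    // seen, so nothing published between two polls is lost.
    public class NotificationBuffer {

        public static class Notification {
            public final long seq;
            public final String json;
            Notification(long seq, String json) { this.seq = seq; this.json = json; }
        }

        private final Deque<Notification> buffer = new ArrayDeque<Notification>();
        private final int capacity;
        private long nextSeq = 1;

        public NotificationBuffer(int capacity) {
            this.capacity = capacity;
        }

        public synchronized void add(String json) {
            buffer.addLast(new Notification(nextSeq++, json));
            if (buffer.size() > capacity) {
                buffer.removeFirst();      // oldest notifications fall off once full
            }
            notifyAll();                   // wake up any poll waiting for news
        }

        // Returns everything newer than 'lastSeen', waiting up to 'timeoutMillis'
        // if nothing is available yet (this is the long-poll blocking point).
        public synchronized List<Notification> since(long lastSeen, long timeoutMillis)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (nextSeq - 1 <= lastSeen) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    break;
                }
                wait(remaining);
            }
            List<Notification> result = new ArrayList<Notification>();
            for (Notification n : buffer) {
                if (n.seq > lastSeen) {
                    result.add(n);
                }
            }
            return result;
        }
    }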

Any idea on how I could solve this problem more elegantly?

Thanks!


HTTP Streaming is most definitely a better solution than HTTP long-polling. WebSockets are an even better solution.

WebSockets offer the first standardised bi-directional, full-duplex solution for realtime communication on the Web between any client (it doesn't have to be a web browser) and a server. IMHO WebSockets are the way to go, since they are a technology that will continue to be developed, supported and in demand, and will only grow in usage and popularity. They're also super-cool :)

There appear to be a few WebSocket clients for Java, and Jetty also supports WebSockets.
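
If you go that route with Jetty, a plain JSR 356 style endpoint is roughly all that's needed to broadcast JSON to every connected client. A minimal sketch, assuming the javax.websocket API (recent Jetty/Jakarta versions use jakarta.websocket instead); the endpoint path and class name are placeholders:

    import java.io.IOException;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.websocket.OnClose;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // Broadcast endpoint: every connected client receives each JSON notification.
    @ServerEndpoint("/notifications")
    public class NotificationEndpoint {

        // Sessions of all currently connected clients (browsers, Java clients, ...).
        private static final Set<Session> sessions = ConcurrentHashMap.newKeySet();

        @OnOpen
        public void onOpen(Session session) {
            sessions.add(session);
        }

        @OnClose
        public void onClose(Session session) {
            sessions.remove(session);
        }

        // Called by the application whenever there is something to broadcast.
        public static void broadcast(String json) {
            for (Session s : sessions) {
                if (s.isOpen()) {
                    try {
                        s.getBasicRemote().sendText(json);
                    } catch (IOException e) {
                        sessions.remove(s);   // drop clients we can no longer reach
                    }
                }
            }
        }
    }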


Sorry for bumping this up, but I believe many people will come across this thread, and the accepted answer, IMHO, is at least outdated, if not misleading.

In order of priority, I'd rank the options as follows:

1) WebSockets are the solution nowadays. I've personally had the experience of introducing WebSockets in enterprise-oriented applications. All of the major browsers (Chrome, Firefox, IE - in alphabetical order :)) support WebSockets natively. All major servers/servlet containers (IIS, Tomcat, Jetty) do the same, and there are quite a number of Java frameworks implementing the JSR 356 APIs. There is a problem with proxies, especially in cloud deployments, yet awareness of the WebSocket requirements is high, so NginX added support for them 1.5 years ago already. Anyway, the secured 'wss' protocol solves the proxy problem in 99.9% of cases (not 100% just to be on the safe side; I've never run into it myself).

2) Long polling is probably the second best solution, and the 'probably' part is due to the 'short polling' alternative. By long polling I mean a repeated request from the client to the server, which responds as soon as any data is available. Thus, one poll can finish in a few milliseconds, another only when the maximum wait time expires. Be sure to limit the poll time to something less than 2 minutes, since otherwise you'll usually need to handle timeout errors on the client side; I'd propose limiting the poll time to something like tens of seconds. Once a poll finishes (at the timeout or before it), it is immediately repeated (though it's better to establish some simple protocol and give your server a chance to tell the client 'suspend'). The con of long polling, which IMHO justifies continuing this list, is that it holds one of the few connections (4? 8? still not that many) that a browser allows each page to establish to a server, so it can eat up ~12% to ~25% of your site's client traffic resource. A minimal servlet sketch of this approach follows the list below.

3) Short polling is not loved by many, but sometimes I prefer it. The main con of this one is, of course, the higher load on the browser and the server from constantly establishing new connections. Yet I believe that if connection pools are used properly, that overhead is much smaller than it looks at first glance.

4) HTTP streaming, be it page streaming via an IFrame or XHR streaming, is, IMHO, a highly BAD solution, since it accumulates the cons of all the rest and more:

  • you'll hold connections open (consuming resources on both the browser and the server);

  • you'll still be eating into the browser's per-page connection limit;

  • most evil: you'll need to design/implement (or reuse a design/implementation of) the actual content framing in order to tell new content from old (be it by pushing scripts, oh my!, or by tracking the length of the accumulated response). Please, don't do this.
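
Here is the long-polling servlet sketch promised in 2), based on the Servlet 3.0 async API. The class and queue names are placeholders, and note that a plain shared queue hands each notification to a single poller; a real broadcast needs per-client queues or a sequence-number cursor like the one the question describes:

    import java.io.IOException;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Long-polling servlet: the request is suspended until a notification is
    // available or the poll window expires, and the client immediately re-polls.
    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {

        // Placeholder notification source. A plain queue delivers each item to a
        // single poller only; per-client queues are needed for a true broadcast.
        private static final BlockingQueue<String> notifications = new LinkedBlockingQueue<>();

        private static final long POLL_TIMEOUT_SECONDS = 30; // well under the ~2 min limit

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            AsyncContext async = req.startAsync();
            async.setTimeout(TimeUnit.SECONDS.toMillis(POLL_TIMEOUT_SECONDS) + 5_000);

            async.start(() -> {
                try {
                    // Block until something is published or the poll window closes.
                    String json = notifications.poll(POLL_TIMEOUT_SECONDS, TimeUnit.SECONDS);
                    HttpServletResponse r = (HttpServletResponse) async.getResponse();
                    r.setContentType("application/json");
                    r.getWriter().write(json != null ? json : "[]"); // empty poll -> client re-polls
                } catch (Exception e) {
                    // nothing useful to send; just finish the request
                } finally {
                    async.complete();
                }
            });
        }

        // Called by the application to publish a notification.
        public static void publish(String json) {
            notifications.offer(json);
        }
    }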

Update (20/02/2019)

If WebSockets are not an option, Server-Sent Events are the second best option IMHO - effectively, browsers have implemented HTTP streaming for you here at a lower level.
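
A minimal server-side sketch of SSE from a plain servlet; the names are placeholders and the demo loop merely stands in for a real notification source:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Server-Sent Events: one long-lived response, each event framed as
    // "id: ...\n" "data: ...\n\n". The browser's EventSource reconnects on its
    // own and resends the last id it saw in the Last-Event-ID header.
    @WebServlet("/events")
    public class SseServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/event-stream");
            resp.setCharacterEncoding("UTF-8");
            resp.setHeader("Cache-Control", "no-cache");

            PrintWriter out = resp.getWriter();

            // Demo loop: a real application would block on its notification
            // source here instead of sleeping.
            for (long id = 1; id <= 5; id++) {
                out.write("id: " + id + "\n");
                out.write("data: {\"notification\": " + id + "}\n\n");
                out.flush();
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
    }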


I have done this before using HTTP streaming via the Atmosphere framework and it worked fine.

Visit Comet, Streaming

If you look at the Atmosphere tutorial, they have given multiple examples.


You may want to check how they implemented this in CometD: http://cometd.org. Or you may even consider using that tool directly, without having to reinvent the wheel.
