Jython(WLST)/Python Communication

I want to set up a communication link between Jython and Python. I have a Django app and Python scripts that I use as a front end and for system admin/automation tasks. I use Jython for WebLogic 9/10. What I want is to be able to hand the Jython system a request, such as task A with args a, b, c, and get a message back when it is done.

I want to do this because WLST/Jython is slow to start, and it becomes a pain when I need to do a deploy or check the status of a server or servers (up to 100 right now). So what would be the easiest way to share information back to the main script or Python class while keeping the Jython/WLST system alive so it can easily share data and take requests?

The way I have been doing it is with the pickle module: gather all the data, dump it to a file, then load the file back into the Python app/script.


Have you considered Celery or some other standard queue/broker messaging system? django-celery is quite mature, well developed, and specifically designed for this kind of task.

Django -> Celery --> Worker Process (always running)
           ^     |-> Worker Process
           |     `-> Worker Process -,
           \______ Job Complete _____/

The basic idea is that you have always-running worker processes (on one or more servers) waiting for messages to come in (these can be pickled objects, JSON, or whatever you want). The processes sit idle until Celery (and its RabbitMQ backend) dishes out a message/job to them. Once the message/job is processed, a notification comes back through the broker and triggers a callback in Django, where you update the status.

Celery is a task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.

The execution units, called tasks, are executed concurrently on one or more worker servers. Tasks can execute asynchronously (in the background) or synchronously (wait until ready).
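
For concreteness, here is a minimal sketch of what such a task could look like. The broker URL and the deploy_app task are illustrative assumptions, not your actual WLST code:

# tasks.py -- a minimal Celery sketch; the broker URL and the
# deploy_app task are assumptions for illustration, not real WLST code.
from celery import Celery

app = Celery('wlst_tasks',
             broker='amqp://guest@localhost//',
             backend='rpc://')

@app.task
def deploy_app(domain, server, ear_path):
    # A real worker would hand this request off to the long-running
    # Jython/WLST process; here we only echo what was requested.
    return 'deployed %s to %s/%s' % (ear_path, domain, server)

From the Django side you would then enqueue the job with deploy_app.delay('mydomain', 'ms01', '/apps/app.ear') and, if you want to block, call .get(timeout=600) on the result. Note that Celery workers run on CPython, so the worker would relay requests to your WLST process rather than being the WLST process itself.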


For this kind of messaging I like to use a truly language-agnostic message-queueing system that can be reused in future projects. Look at AMQP if you can handle having a message-queue broker in the middle managing all the queues, or look at ZeroMQ if you don't want a third-party broker.

In both cases you can send messages using a pub-sub queue that can feed several workers per queue if needed. The messages can be simple text strings (e.g. tnetstrings, http://tnetstrings.org/), JSON objects, or, if you are careful, even pickled Python objects along with code to execute. Personally I like using JSON objects (a subset of JSON) and unpacking them into Python dicts.
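
As a sketch of the ZeroMQ option using pyzmq and JSON messages (the port and the "task"/"args" message fields are made up for illustration), a simple request/reply pair could look like this:

# Minimal ZeroMQ request/reply sketch with pyzmq and JSON messages.
# The port and the message fields ("task", "args") are assumptions.
import json
import zmq

def worker():
    # Long-running process that keeps the expensive state alive.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind('tcp://127.0.0.1:5555')
    while True:
        request = json.loads(sock.recv())  # e.g. {"task": "status", "args": ["ms01"]}
        # ... dispatch to the real task here ...
        sock.send_string(json.dumps({'task': request['task'], 'status': 'done'}))

def client(task, args):
    # Called from the Django app / front-end scripts.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect('tcp://127.0.0.1:5555')
    sock.send_string(json.dumps({'task': task, 'args': args}))
    return json.loads(sock.recv())

One caveat: pyzmq is a C extension, so it will not load under Jython; on the Jython side you would use a Java ZeroMQ binding such as JeroMQ instead.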

I have used both AMQP and ZeroMQ in systems with around 20 communicating Python processes. Both work well, and if you need to connect to something non-Python, you will find that AMQP modules and ZeroMQ libraries already exist for it.

An interesting extension of your scenario would be three kinds of worker processes, written in Jython, CPython, and IronPython. That way you can leverage third-party Java and .NET modules as well as binary CPython modules like lxml. Combine it with something like Redis so the processes are completely decoupled and can run on multiple servers if necessary. The workers would put their results into Redis instead of gumming up the message-queuing system with big messages interleaved with small ones; if necessary, a worker can publish a message containing a Redis key so that another process can retrieve the value.
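
A rough sketch of that "big payload in Redis, small key on the queue" pattern using redis-py (the key format and TTL are assumptions):

# Park large results in Redis so only a small key travels through the
# message queue; key naming and the one-hour TTL are assumptions.
import json
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def publish_result(job_id, payload):
    key = 'result:%s' % job_id
    r.set(key, json.dumps(payload), ex=3600)  # expire after one hour
    return key  # send only this key in the queue message

def fetch_result(key):
    raw = r.get(key)
    return json.loads(raw) if raw is not None else None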


Pickling is fine; use cPickle for efficiency. However, you should not write it to a file. Rather, use another IPC mechanism, such as sockets or pipes (see https://stackoverflow.com/search?q=python+named+pipes), to avoid the disk overhead.
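
For example, a file-free handoff over a local socket might look like the sketch below (Python 2 style, since WLST's Jython is 2.x; the host and port are assumptions):

# Ship a pickled object over a local TCP socket instead of a file
# (Python 2 / Jython 2.x style); host and port are assumptions.
import cPickle as pickle
import socket

def send_obj(obj, host='127.0.0.1', port=9999):
    # Jython/WLST side: serialize and push the result.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    try:
        s.sendall(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL))
    finally:
        s.close()

def recv_obj(port=9999):
    # CPython side: accept one connection and unpickle the payload.
    # Only do this on a trusted local link; unpickling runs code.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    conn, _ = srv.accept()
    chunks = []
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    conn.close()
    srv.close()
    return pickle.loads(''.join(chunks))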
