Timing user tasks with seconds precision

I'm building a website where I need to time users' tasks, show them the time as it elapses, and keep track of how long it took them to complete the task. The timer should be precise to the second, and an entire task should take about 3-4 hours tops.

I should also prevent the user from forging the completion time (there is no money involved, so it's not really high-risk, but there is some risk).

Currently I use a timestamp to keep track of when the user began, and at the same time initialize a JS-based timer. When the user finishes I get a notice, and I calculate the difference between the current time and the beginning timestamp. This approach is no good: there is a few seconds' difference between the user's timer and my calculated time difference (i.e. the time I calculated it took the user to complete the task; note that this was only tested in my dev environment, since I don't have any other environment yet).

Two other approaches I considered are:

1. Relying entirely on a client-side timer (i.e. JS), and when the user completes the task, sending the time it took him, encrypted (this way the user can't forge a start time). This doesn't seem very practical, since I can't figure out a way to generate a secret key on the client side which would really be "secret".

2. Relying entirely on a server-side timer, and sending "ticks" every second. This seems like a lot of server-side work compared to the other two methods (machine work, not human, e.g. accessing the DB for every "tick" to get the start time), and I'm also not sure it will be completely accurate.

EDIT:

Here's what's happening now in algorithm wording:

  1. User starts task - server sends the user a task id and records the start time in the DB; the client-side timer is initialized.
  2. User does the task; his timer is running...
  3. User ends the task; the timer is stopped, and the user's answer and task id are sent to the server.
  4. Server retrieves the start time (using the received task id) and calculates how long it took the user to complete the task.

Problem - the time as calculated by the server and the time as displayed on the client side are different.
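
Here's a stripped-down sketch of the client side of this flow, in case it helps (endpoint names and response fields are simplified placeholders, not my actual code):

```js
// Start the task: the server records its own start time and returns a
// task id; the client-side display timer starts at roughly the same moment.
let startedAt, timerId;

async function startTask() {
  const res = await fetch('/task/start', { method: 'POST' });
  const { taskId } = await res.json();
  startedAt = Date.now();
  timerId = setInterval(() => {
    const elapsed = Math.floor((Date.now() - startedAt) / 1000);
    document.getElementById('timer').textContent = elapsed + 's';
  }, 1000);
  return taskId;
}

// Finish the task: only the answer and task id go back; the server
// computes the duration from its own recorded start time, which is
// where the client/server discrepancy shows up.
async function finishTask(taskId, answer) {
  clearInterval(timerId);
  await fetch('/task/finish', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ taskId, answer }),
  });
}
```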

Any insight will be much appreciated.


If I've understood correctly, the problem is that the server and client times are slightly different, which they always will be.

So I'd slightly tweak your original sequence as follows:

  1. User starts task - server sends the user a task id and records the start time in the DB; the client-side timer is initialized.
  2. Client notifies the server of the client start time, which is recorded in the DB alongside the server start time.
  3. User does the task; his timer is running...
  4. User ends the task; the timer is stopped, and the user's elapsed time, answer and task id are sent to the server.
  5. Upon receipt, the server notes the incoming request time, retrieves the start times, and calculates how long it took the user to complete the task using both the server times (start/finish) and the client times.
  6. Server ensures that the client value is within an acceptable range of the server-verified time and uses the client time. If the client time is not within an acceptable range (e.g. 30 seconds), the server times are used as the figure.

There will be slight differences in time due to latency, server load, etc., so by using the client values it will be more accurate and just as secure, because those values are sanity-checked.
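
As a rough sketch, the sanity check in steps 5-6 could look something like this on the server (Express-style; the route, field names and `db` helpers are assumptions, not a definitive implementation):

```js
const express = require('express');
const db = require('./db'); // hypothetical data-access module
const app = express();
app.use(express.json());

const TOLERANCE_MS = 30 * 1000; // acceptable client/server disagreement

app.post('/task/finish', async (req, res) => {
  const { taskId, answer, clientElapsedMs } = req.body;

  // Hypothetical helper: returns the server start time saved in step 1.
  const serverStartMs = await db.getStartTime(taskId);
  const serverElapsedMs = Date.now() - serverStartMs;

  // Step 6: prefer the client's figure when it passes the sanity check,
  // otherwise fall back to the server-measured duration.
  const finalElapsedMs =
    Math.abs(serverElapsedMs - clientElapsedMs) <= TOLERANCE_MS
      ? clientElapsedMs
      : serverElapsedMs;

  await db.saveResult(taskId, answer, finalElapsedMs); // hypothetical helper
  res.json({ elapsedMs: finalElapsedMs });
});
```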


To answer the comment:

You can only have one sort of accuracy: either accuracy in terms of what the client/user sees, or accuracy in terms of what the server knows. Anything coming from the client side could be tainted, so there has to be a compromise somewhere. You can minimise this with measurements and offsets, so that the end difference is within the same range as the start difference, using the server time, but it will never be 100% tamper-proof. If it's really that much of an issue, then store times with less precision.

If you really must have accuracy and reliability, then the only way is to use the server time: periodically grab it via AJAX for display, and use a local timer to fill in the gaps, with a sliding adjustment algorithm between actual and reported times.
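
For example, a minimal version of that sliding adjustment might look like this (the `/server-time` endpoint and its response shape are assumptions):

```js
// Client side: show server-based time, smoothed by a local timer.
// Assumes a /server-time endpoint returning { now: <ms since epoch> }.
let offsetMs = 0; // estimated (server clock - client clock)

async function syncWithServer() {
  const before = Date.now();
  const { now } = await (await fetch('/server-time')).json();
  const after = Date.now();

  // Assume the server replied roughly halfway through the round trip.
  const estimatedServerNow = now + (after - before) / 2;
  const newOffset = estimatedServerNow - after;

  // Slide toward the new offset instead of jumping, so the displayed
  // clock never visibly stutters.
  offsetMs += (newOffset - offsetMs) * 0.25;
}

// The local timer fills in the gaps between polls.
function serverNow() {
  return Date.now() + offsetMs;
}

syncWithServer();                       // initial sync
setInterval(syncWithServer, 30 * 1000); // re-sync every 30 seconds
```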


I think this will work. Seems like you've got a synchronization issue and also a cryptography issue. My suggestion is to work around the synchronization issue in a way invisible to the user, while still preserving security.

Idea: Compute and display the ticks client side, but use cryptographic techniques to prevent the user from sending a forged time. As long as the user's reported time is close to the server's measured time, just use the user's time. Otherwise, claim forgery.

  1. Client asks server for a task.

  2. Server gets the current timestamp and encrypts it with a secret key known only to the server. This is sent back to the client along with the task (which can be plain text).

  3. The client works until they are finished. Ticks are recorded locally in JS.

  4. The client finishes and sends the server back its answer, the number of ticks it recorded, and the encrypted timestamp the server first sent it.

  5. The server decrypts the timestamp, and compares it with the current local time to get a number of ticks.

  6. If the server's computed number of ticks is within some tolerance (say, 10 seconds, to be safe), the server accepts the user's reported time. Otherwise, it knows the time was forged.

Because the user's time is accepted (so long as it is within reason), the user never knows that the server time could be out of sync with their reported time. Since the time periods you're tracking are long, losing a few seconds of accuracy doesn't seem like it will be an issue. The method requires only the encryption of a single timestamp, so it should be fast.
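
A minimal Node.js sketch of steps 2, 5 and 6, using AES-256-GCM as the authenticated cipher (one reasonable choice; the key handling, token format and function names are assumptions):

```js
const crypto = require('crypto');

// Secret key held only on the server; never sent to the client.
const KEY = crypto.randomBytes(32); // in practice, load from secure config

// Step 2: seal the start timestamp before handing it to the client.
function sealTimestamp(tsMs) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', KEY, iv);
  const ct = Buffer.concat([cipher.update(String(tsMs)), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([iv, tag, ct]).toString('base64');
}

// Step 5: open the token the client sent back; throws if tampered with.
function openTimestamp(token) {
  const buf = Buffer.from(token, 'base64');
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = crypto.createDecipheriv('aes-256-gcm', KEY, iv);
  decipher.setAuthTag(tag);
  const pt = Buffer.concat([decipher.update(ct), decipher.final()]);
  return Number(pt.toString());
}

// Step 6: sanity-check the client's reported tick count.
function acceptReport(token, clientTicks, toleranceSec = 10) {
  const serverTicks = Math.round((Date.now() - openTimestamp(token)) / 1000);
  return Math.abs(serverTicks - clientTicks) <= toleranceSec;
}
```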


The only way to prevent cheating is not to trust the client at all, but simply to calculate the final time on the server as the time taken from before sending the task to the client to after receiving the result.

This also implies that the final time has to include some network transmission delays, as unfair as that might seem: if you try to compensate for them somehow, the client can always pretend to be suffering from more delays than it actually is.

What you can do, however, is try to ensure that the network delays won't come as a surprise to the user. Below is a simple approach which completely prevents cheating while ensuring, given some mild assumptions about clock rates and network delays, that the running time shown on the client side when the results are submitted should approximately match the final time calculated on the server:

  1. Client starts timer and requests task from server.
  2. Server records current time and sends task to client.
  3. User completes task.
  4. Client sends result to server and (optionally) stops timer.
  5. Server accepts result and subtracts timestamp saved in step 2 from current time to get final time.
  6. Server sends final time back to client.

The trick here is that the client clock is started before the task is requested from the server. Assuming that the one-way network transmission delay between steps 1 and 2 and steps 4 and 5 is approximately the same (and that the client and server clocks run at approximately the same rate, even if they're not in sync), the time from step 1 to 4 calculated at the client should match the time calculated on the server from step 2 to 5.
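
On the client, that sequence might look roughly like this (the endpoints, response fields and `doTask` helper are hypothetical):

```js
// Client side sketch of steps 1, 4 and 6.
async function runTask() {
  // Step 1: start the clock *before* requesting the task.
  const clientStart = performance.now();
  const { taskId, task } = await (await fetch('/task/start')).json();

  const answer = await doTask(task); // user works on the task (step 3)

  // Step 4: submit the result (the display timer can keep running here).
  const res = await fetch('/task/finish', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ taskId, answer }),
  });

  // Step 6: replace the running clock with the server's final time.
  // With roughly symmetric network delays this should be slightly less
  // than the locally measured time, so the visible jump is backwards.
  const { finalTimeMs } = await res.json();
  console.log('local:', performance.now() - clientStart, 'server:', finalTimeMs);
}
```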

From a psychological viewpoint, it might even be a good idea to keep the client clock running past step 4 until the final time is received from the server. That way, when the running clock is replaced by the final time, the jump is likely to be backwards, making the user happier than if the time had jumped even slightly forwards.


The best way to prevent the client from faking the timestamp is simply to never let them have access to it. Use a timestamp generated by your server when the user starts. You could store this in the server's RAM, but it would probably be better to write this into the database. Then when the client completes the task, it lets the server know, which then writes the end timestamp into the database as well.
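
In sketch form, with an Express-style server and a made-up `tasks` table, that approach could look like this:

```js
const express = require('express');
const db = require('./db'); // hypothetical data-access module
const app = express();

// The client never sees or supplies a timestamp; both ends of the
// interval come from the server clock.
app.post('/task/:id/start', async (req, res) => {
  await db.run('UPDATE tasks SET started_at = ? WHERE id = ?',
               [Date.now(), req.params.id]);
  res.sendStatus(204);
});

app.post('/task/:id/finish', async (req, res) => {
  const endedAt = Date.now();
  await db.run('UPDATE tasks SET ended_at = ? WHERE id = ?',
               [endedAt, req.params.id]);
  const row = await db.get('SELECT started_at FROM tasks WHERE id = ?',
                           [req.params.id]);
  res.json({ durationMs: endedAt - row.started_at });
});
```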

It seems like the important information you need here is the difference between the start and end times, not the actual start and end times. And if those times are important, then you should definitely be using a single device's time-tracking mechanism: the server's time. Relying upon the client's time prevents the times from being comparable to each other due to differences in time zones. Additionally, it's too easy for the end user to fudge their time (accidentally or intentionally).

Bottom Line: There is going to be some inaccuracy here. You must compromise when you need to satisfy so many requirements. Hopefully this solution will give you the best results.


Clock synchronization

This is what you are looking for: the Wikipedia explanation.

And here is a solution for JavaScript.
