vLHCathome-dev

If you would like to join this project, you will need an invitation code first.
To get an invitation code, please check out this post.

This project uses the CERN-developed CernVM virtual machine and the BOINC virtualization layer to harness volunteer cloud computing power for full-fledged LHC event physics simulations on volunteers' computers.
This is just a BOINC server. Please see the LHC@home pages for more information. If you have any problems or questions, please visit the Message Boards, Questions and Answers, and FAQ.

Join vLHCathome-dev

  • Read our rules and policies
  • Create an account on this page; you will need an invitation code for this. To get an invitation code, please check out this post.
  • This project uses BOINC. If you're already running BOINC, select Add Project. If not, download BOINC.
  • When prompted, enter
    http://lhcathomedev.cern.ch/vLHCathome-dev/
  • If you're running a command-line version of BOINC, create an account first.
  • If you have any problems, get help here.
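For command-line installations, the steps above might look like the following sketch. This assumes BOINC's boinccmd tool is available; YOUR_EMAIL, YOUR_PASSWORD, YOUR_NAME and YOUR_ACCOUNT_KEY are placeholders, and whether command-line account creation works with this project's invitation-code requirement is an assumption to verify. The commands are printed rather than executed here:

```shell
# Sketch only (placeholders throughout): how a command-line attach might look.
PROJECT_URL="http://lhcathomedev.cern.ch/vLHCathome-dev/"

# Command-line clients create the account first, then attach with the key:
CREATE_CMD="boinccmd --create_account $PROJECT_URL YOUR_EMAIL YOUR_PASSWORD YOUR_NAME"
ATTACH_CMD="boinccmd --project_attach $PROJECT_URL YOUR_ACCOUNT_KEY"

# Printed rather than run, since boinccmd may not be installed on this machine:
echo "$CREATE_CMD"
echo "$ATTACH_CMD"
```

Your account key can also be looked up on the project web site after creating the account there.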

User of the day

Profile: Aleksander Parkitny
I'm living in Wroclaw, Poland, working as a junior ITS and Security Specialist. I'm counting for over 70 projects. I think that it is very important...

News

CMS Servers up again
https://www.neowin.net/news/dirty-cow-flaw-lets-hackers-gain-control-of-linux-systems-every-single-time

YEP Linux is just the greatest and most secure OS ever 😎


.....I didn't do it.......and I never liked a Dirty Cow

(OK I won't restart the OS war)
24 Oct 2016, 15:50:09 UTC · Comment


Server Consolidation
As mentioned previously, we would like to consolidate the existing production servers (Sixtrack, vLHC and ATLAS) into a single service. We hope that by doing this we can improve support and reduce confusion. One benefit for all is that there will be a single forum, so that both we and our volunteer moderators can be more effective.

The transition will have two phases: commissioning and decommissioning. First, a new server will be prepared with a configuration similar to this dev project but based on the Sixtrack DB. Sixtrack has the most users, and 50% of the active users from vLHC and ATLAS are already there, so this should minimize the impact. Once this new server is ready, it will be opened up for use in parallel with the existing three servers.

Next comes the decommissioning. For Sixtrack this should be straightforward: the URLs for the old host will be redirected to the new host. For vLHC and ATLAS, things will be a little more complicated. Those users who are already registered with Sixtrack will be encouraged to move to the new server. Those who are not registered can either register themselves and move, or we can do a bulk registration. Tasks can then be stopped and the URLs redirected.

Finally, there is the issue of credit. It should be possible to migrate the credit from the old servers to the new server, but this can only really be done once the old servers are no longer used. There is no time-critical aspect; it is just that, until the migration is done, only the new credit will be visible.

Comments and feedback on this proposal are welcome.

P.S. The dev project will stay around as it is.
5 Oct 2016, 12:56:22 UTC · Comment


Migration to SSL
The scheduler and web pages of this dev project are now also published on the URL:

https://lhcathome.cern.ch/vLHCathome-dev

Please detach and re-attach to this new URL with your BOINC clients. The old project server is still running, and the file upload and validation daemons also run on the old server for now.

Later on after a test period, we will redirect the old URL. Then we will proceed in a similar way with our production project.
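For command-line clients, the detach/re-attach step might look like the sketch below (assuming the boinccmd tool; YOUR_ACCOUNT_KEY is a placeholder, and GUI users can simply use Remove and Add Project instead). The commands are printed rather than executed:

```shell
# Sketch only: detach from the old HTTP URL, then re-attach via the new HTTPS one.
OLD_URL="http://lhcathomedev.cern.ch/vLHCathome-dev/"
NEW_URL="https://lhcathome.cern.ch/vLHCathome-dev"

echo "boinccmd --project_detach $OLD_URL"                  # drop the old attachment
echo "boinccmd --project_attach $NEW_URL YOUR_ACCOUNT_KEY" # re-attach over HTTPS
```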
16 Sep 2016, 14:18:50 UTC · Comment


Task Tracker
I have added a task tracker to the top left of the page so everyone can see which issues we know about, which ones are being worked on, and what is being done right now. It still needs to be populated with a few items. 3 Aug 2016, 10:08:09 UTC · Comment


Task and CPU limiter
The server has just been updated to add the feature that limits Tasks and CPUs per user. This limit can be controlled in the project preferences.

Together with my changes to the scheduler, per-project limits on jobs in progress and #CPUs should now work. But I haven't actually tested this. Laurence, please try it and tell me if it doesn't work.
-- David


Please post any feedback in this thread. 31 Jul 2016, 7:48:14 UTC · Comment

... more

News is available as an RSS feed.