by Bradley Knockel (last modified Apr. 2021)
Let me tell you about volunteer computing. I had some old (and newer) laptops and mobile devices lying around, and I now use them to cure diseases, find black holes, and solve difficult math problems. But I don't do a thing! I just install Folding@home and/or BOINC, which take care of most everything for me! Your computer has both a CPU and GPU, and this software uses them to do calculations, then it uses an Internet connection to send back the results. By default, calculations will only run when plugged in (not on battery power) and will not use mobile data (only WiFi or Ethernet). Processes are run with low priority to cause minimal interference with computer performance, and several settings exist to optionally restrict the software further.
In 2019, the Summit supercomputer (built with RISC CPUs and Nvidia Tesla GPUs) became the world's most powerful supercomputer. Folding@home was approaching Summit's power and has now far surpassed it thanks to the influx of new users during the COVID-19 outbreak! BOINC was always less popular. Combining the efforts of many computers is called distributed computing, and, now that there are vast numbers of devices sitting around, let's put them to use! Getting access to supercomputers is difficult due to high demand, especially now that CPUs are no longer obeying Moore's law, so let's meet the demand!
If you have any iOS device (iPhone or iPad), you cannot use Folding@home or BOINC. Same for a Chromebook, though there are rumors of tricky ways to get BOINC to work on a Chromebook (good luck). By the way, most tech people prefer Android and real laptops over iOS and Chromebooks.
Folding@home software is very easy to use, the work goes to a very important cause (curing, preventing, and treating diseases), and good scientific progress is being made. I highly recommend you do this before trying BOINC. Here is some basic info on what it does.
Folding@home is perfect for any desktop or laptop. In your computer's power settings, have it never go to sleep when plugged in (depending on your operating system, you might need to keep your laptop lid open when it charges). By default, Folding@home will not run on battery power, which is great. Even though RISC (ARM) CPUs are the future, x86 (Intel and AMD) CPUs are the present, so Folding@home should run on any x86-64 Windows, macOS, or Linux computer.
I found the documentation to be lacking and rarely updated. Below are some technical notes I have found to be quite useful. Just read the first half if you aren't a technical person.
Don't let work units reach their timeout date. On the timeout date, the work unit will be sent to someone else while it continues to run on your computer until its expiration date, causing a large waste of work and a delay for the entire project. Returning work units in a timely manner is crucial because the next batch of work units depends on the results of the previous batch. For this reason, you get more points if you finish earlier. If you see that many of your work units are timing out, please uninstall the software. If you plan on shutting down your computer for a long time, set the work unit to "Finish" a few days before you shut it down.
The default installation requires you to log in to your computer for Folding@home to run. Don't worry, switching users or the "lock screen" are not problems after you have logged in. What this means is, if your computer restarts without automatically logging in (or if there is a long-enough power outage), your computer will pause its calculations until you log in again.
After hitting the "Finish" button and having the work unit finish, I was annoyed to discover that, upon restarting my computer, a new work unit was downloaded. It turns out that, regardless of any "Pause" settings, when Folding@home starts up, it starts folding. There is an "Expert" pause-on-start setting that can stop this behavior, but then it always starts paused regardless of whether you are on the "Fold" setting! However, this "Expert" setting is useful if you want to pause folding for a few weeks without having to uninstall the software.
Folding@home recommends to use the default "medium" power setting, which uses 75% of your logical CPU cores and allows for GPU processing. I recommend this too unless you find for some reason that your computer is running too hot. On my primary computer that I actually care about, I set it to "light" (between 25% and 50% of logical cores) to prevent the fan from running too loud and possibly breaking the fan many years from now. On a computer with just 2 physical cores (no hyper-threading), I use "full" setting to be sure that I use 100% of the cores.
2020-04 Update: due to tight deadlines while working on COVID-19, "light" just barely gave my computer's CPU enough time to finish! I needed "medium" and never letting my computer go to sleep. I also had a very slow computer that could not meet deadlines. I don't have any fancy GPUs, so maybe these deadlines are reasonable with GPUs.
None of my GPUs are powerful enough to be on the GPU whitelist. If your GPU is on this list, I highly recommend you keep allowing the software to use it! For certain types of calculations, such as what Folding does, a GPU is far faster than a CPU alone. A GPU still requires the computer's brain, the CPU, to work. All Folding calculations can be done using a GPU, but that doesn't mean that all work units are designed for GPUs because: CPU-only programming is much easier, CPUs without GPUs still make a difference, and powerful GPUs can use a lot of electricity. To get a GPU to always run, use the "full" setting, and, to get a GPU to run on idle, you need "medium". If the "full" setting noticeably affects computer performance, reduce it! FAHBench is great software for benchmarking your computer's CPU and GPU!
I recommend the default "While I'm working" setting (not "Only when idle"). The software runs at low priority causing it to basically only use the leftover CPU power, so I've never seen a drop in computer performance at "medium" setting. If you do "Only when idle", you will lose work as it reverts to the most recent checkpoint: every 1% or, for slow machines, every 15 minutes. The 15 minutes can be set to as low as 3 minutes, but saving checkpoints wastes resources, so I don't recommend it.
For some reason, it seems that the first CPU work unit sent to your computer will run with just 1 logical core regardless of your settings. Also, if a power setting allows for only 1 CPU core to run, you may get a work unit that does not allow parallel computing between multiple cores, so increasing the power setting will make no difference for just that work unit.
I much prefer the "Advanced Control" (aka "FAHControl") instead of the "web control". You can see much more information and access more settings beyond the basic power slider.
From what I can tell, TPF (time per frame) is the time between 1% checkpoints.
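If that reading of TPF is right, it makes estimating a work unit's remaining time easy. A minimal sketch (the function name is my own, not part of any Folding@home tool):

```python
# Minimal sketch, assuming TPF really is the time per 1% "frame",
# so a work unit has 100 frames in total.

def eta_minutes(tpf_minutes, percent_done):
    """Estimated minutes remaining for a work unit."""
    return tpf_minutes * (100 - percent_done)

# A work unit at 40% done with a TPF of 2.5 minutes:
print(eta_minutes(2.5, 40))  # -> 150.0 minutes left
```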
To combine points of many computers, make sure to set your identity (name and team) to the exact same thing on each computer.
Speaking of earning points, Folding@home is so successful partly because of their clever marketing campaign. For example, you can select a cause/disease to work on (though your choice does not guarantee that all your resources will go towards the cause). Of course it doesn't matter what you choose because people who don't choose will get more work for the causes that were least chosen. I recommend that you do not choose a cause in case not choosing allows the experts more freedom in choosing what to work on. But it makes people feel good to choose, and it doesn't hurt anything for them to "choose". This just goes to show that successfully communicating science requires the use of emotional tricks. Makes me wonder if all logical actions result from emotional tricks perhaps from our own brains.
By far, the things that get damaged the most on a computer are the mechanical parts (keyboards, fans, hard disks, etc.) or are the results of mechanical damage (dropping, spilling, etc.). Batteries can only do so many charge cycles, but, by default, volunteer computing does not use the battery. SSDs (unlike HDDs) are not mechanical, but they can only endure so many writes; however, nearly all (all?) current projects have a low write rate to the storage drive. The electrical logic circuits are typically just fine when a computer reaches its end of life, especially if you use them safely, so why not put them to use?
The real cost can be electricity, which, for an x86 CPU running 24-7 over decades, is less than the cost of the computer, so why not get your money's worth by actually using your computer! This energy is actually helpful over the winter when the heat warms your home! And, for ARM CPUs in smaller devices, the electricity cost is so tiny it doesn't matter.
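A back-of-envelope sketch of that electricity claim, using made-up but plausible numbers (65 W for an x86 laptop under load, 5 W for a small ARM board, $0.12 per kWh; adjust for your hardware and local rate):

```python
# Rough yearly electricity cost of running a device 24-7.
# The wattages and the $0.12/kWh rate are assumptions for illustration.

def yearly_cost_usd(watts, usd_per_kwh=0.12):
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

print(round(yearly_cost_usd(65), 2))  # x86 laptop: 68.33
print(round(yearly_cost_usd(5), 2))   # ARM board: 5.26
```

Even a decade of the x86 case (roughly $680 at these assumed numbers) is comparable to the price of a modest laptop, which matches the claim above.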
You may notice slight performance loss, but the software is designed to work in the background to reduce this (especially with the right settings!).
High processor temperature can affect the life of your CPU/GPU, but the lack of temperature fluctuations might actually help! To be safe, look up the safe temperatures for your processors, then figure out how to monitor their core temperature: download Speccy on Windows, use Intel Power Gadget or Macs Fan Control on macOS, and, to see battery temperature on Android, get the CPU-Z app. Make sure that any fans are not blocked and that any charger (or any processor that doesn't have a fan) is exposed to the open air. If you can't remember the last time you dusted out the inside of your computer, dust it! There are many opinions and overgeneralizations about ideal temperatures on the Internet, but here is some good info. I ideally prefer to stay 20 °C below the listed hardware max. None of my computers will go above 10 °C below the max, which is fine for my old half-broken devices.
For some phones, there might be a slight effect on battery life due to "mini cycles". I cannot find reliable information on this. Laptops certainly do not have this issue.
For computers, the fans can wear out!
If you draw a lot of power by simultaneously charging external devices via USB, the PSU (for example, your laptop's AC adapter) can wear out over time, especially if fans or airflow are not sufficient.
At first, I was of the mindset that, since computers use a good amount of electrical power while idling, the most efficient choice was to run the computer as hard as possible without reaching high temperatures or noticeably affecting usability. I then started noticing that all of my devices got more work done per core when I ran fewer CPU cores. I believe this to be caused by resource sharing between the CPU cores...
- All cores have to communicate with the same RAM (and sometimes cores will share an L2 cache).
- For low-end AMD CPUs, a single FPU might be shared between cores (most CPUs have an FPU for each core).
- For many x86 CPUs, hyper-threading causes two threads to fight for resources in a single physical core.
Depending on the work, sharing can be great because not all threads need to be using the exact same resource at the same time. But some work will want to use the RAM or FPU as much as possible. In addition, more power per calculation is needed...
• Running extra cores can cause fans to turn on that are now drawing extra power.
• A processor at a higher temperature draws extra power due to more leakage current.
• The computer trying to coordinate all the sharing also uses more power.
I now try to err on the side of conserving computational resources, especially when I consider how some BOINC applications always want the same work to be completed by at least two people. Why would I risk harm to my bank account to get a 10% increase in total work output when the code and project administration may have any number of inefficiencies that could be increasing run time by 100% or even 1000%? For BOINC, running fewer tasks at a time means finishing individual tasks faster, using less RAM, and losing less progress when BOINC is restarted. Certainly never over-volt your processor!
We need to experiment on each computer for different types of work while monitoring CPU usage and temperature. For Folding@home, do this by measuring the time between checkpoints for a given work unit to see if it scales as expected. For BOINC, I run a bunch of tasks, then average together ones from the same application. Note that, for BOINC, "run time" will not be much larger than "CPU time" when resource sharing is occurring (these will only differ by a lot when you are running more threads than logical cores). With resource sharing, "run time" and "CPU time" both increase, though I have noticed that sometimes "run time" increases a bit more than "CPU time".
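For the BOINC case, the averaging can be scripted. The sketch below assumes job-log-style lines where `ct` is CPU time and `et` is elapsed (run) time in seconds; the exact field layout of your client's job log may differ, so treat this as illustrative:

```python
# Average the ratio of CPU time ("ct") to elapsed run time ("et")
# across completed tasks; ratios well below 1.0 suggest waiting or
# resource sharing. The sample lines below are made up.

sample_log = """\
1617000000 ue 9000 ct 3500.2 fe 1e13 nm task01 et 3600.5 es 0
1617004000 ue 9000 ct 3400.8 fe 1e13 nm task02 et 3550.1 es 0
"""

def mean_ct_over_et(text):
    ratios = []
    for line in text.splitlines():
        fields = line.split()
        ct = float(fields[fields.index("ct") + 1])  # CPU time, seconds
        et = float(fields[fields.index("et") + 1])  # elapsed (run) time
        ratios.append(ct / et)
    return sum(ratios) / len(ratios)

print(round(mean_ct_over_et(sample_log), 3))  # -> 0.965
```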
The main idea of volunteer computing is to put your current devices to use, but, if you really like volunteer computing and want to buy a machine for it, get computationally-powerful energy-efficient Nvidia GPUs! As ARM becomes more popular, get an ARM workstation (Windows or Linux) or a cluster of single-board computers that can run Linux! Especially now that Nvidia is buying ARM, you can hopefully soon get a workstation that has a ton of ARM cores and Nvidia GPUs!
Folding@home currently only runs on x86, so my ARM Android phone and Raspberry Pi cannot be used! Since ARM CPUs are extremely efficient, I very much want to use them! Also, Folding@home will not run on my Intel GPUs (though it runs on some Intel GPUs). GPUs are very fast, so I very much want to use them! This is where BOINC comes in. After using Folding@home for a couple years, I have now completely switched to BOINC because I don't have the fancy GPUs that Folding@home wants.
Folding@home is designed to get the most from the fastest computers. BOINC on the other hand has a million different projects, options, and possibilities. BOINC is the Wild West of volunteer computing! There are many BOINC projects because BOINC is software that anyone can use to create a project. Due to the freedom of BOINC, I always do research to make sure that a project is not a waste of time. Unlike Folding@home, some BOINC projects or sub-projects can require a lot of RAM or RAM speed.
Currently, there are only 3 projects that can use (Gen7 or newer) Intel GPUs: Collatz Conjecture, Einstein@home, and Minecraft@Home.
I would never contribute to Collatz Conjecture after I discovered that the algorithm is invalid. I like and contribute to Einstein@home, but it currently only has "opencl-intel_gpu" application versions for Windows. I've never played Minecraft, so that's a no for me lol.
World Community Grid also "has work" for Intel GPUs, but you'll wait a week before you get any because there isn't enough to go around, so I'd rather not make the problem worse by contributing my GPU to that project (also, the GPU tasks would end with an error after about 12 hours on one of my Intel GPUs).
There are many options for my ARM devices. I am only interested in projects that are CPU-only. I would rather my CPU power go to projects that cannot use GPUs. Even so, there are still plenty of projects to choose from. In general, I prefer Rosetta@home and World Community Grid (WCG) because they are well administered and help treat and cure diseases. I must warn that Rosetta@home can use a lot of RAM, which, if your computer starts swapping memory to your storage drive, will hurt computer performance and will slightly reduce the life of your storage drive. A WCG sub-project, OpenPandemics, uses GPUs, so I choose to not run OpenPandemics on CPUs (or GPUs because there isn't enough work for everyone).
To use my Android phone while it charges overnight, I prefer WCG. When BOINC suspends as I unplug my phone, only a small amount of work is lost because WCG tasks checkpoint frequently and have short runtimes. Rosetta@home uses a lot of RAM. When there are no available WCG Android tasks, I recommend Universe@home (set Universe@home's "Resource share" to 0 to only get tasks when no work is available from any non-zero projects).
To use my Raspberry Pi, I need projects that support "Linux on ARM", a category that includes other single-board computers similar to the Raspberry Pi. I will assume Raspberry Pi OS is your OS. I prefer WCG (the OpenPandemics sub-project works!). I also recommend Universe@home (set Universe@home's "Resource share" to 0 to only get tasks when no work is available from any non-zero projects). Raspberry Pi OS is currently only 32-bit (though a 64-bit beta version can be found in the raspios arm64 folders here!), so this could limit you. Also, my Pi 3 only has 1 GB of RAM, so I cannot do projects like Rosetta@home (Rosetta is also only for 64-bit Linux on ARM). When I tried 64-bit Raspberry Pi OS, WCG and Universe@home stopped working, but this fixed it! In order to edit cc_config.xml, I ran `sudo nano /var/lib/boinc/cc_config.xml`, then I had to run `sudo systemctl restart boinc-client`.
If you want to track your earned credits across projects, use the same email address for each project.
On your BOINC Manager, use the Advanced View (not the Simple View) for much more useful information and options!
BOINC projects offer many settings (the info at this link is great!). Some settings are computing preferences and others are project preferences.
I see no good reason to worry about setting computing preferences at project websites. Just set them uniquely on each computer using BOINC Manager...
• For "Use at most __% of the CPUs" setting, I set this differently for each computer depending on computer's usage, temperature, and resource sharing such as hyper-threading. You simply have to experiment a bit with each computer to see how much work is done at what power and temperature cost. From what I can tell, when rounding a decimal, round the final digit up (for example, to run 5 out of 6 cores, enter 83.4, not 83.3).
• I uncheck the "Suspend when non-BOINC CPU usage is above ___" option because BOINC runs at low priority and because I sometimes run Folding@home alongside. If I ever noticed an issue, I would use this setting. An alternate solution is setting daily schedules, which can also be handy to avoid peak-hour electricity charges (note that Folding@home does not have daily schedule settings).
• On computers with nicer GPUs (even Intel GPUs), I do not need the "Suspend GPU computing when computer is in use" setting. On lesser computers, performance is noticeably affected by BOINC running the GPU.
• If doing CPU-only tasks, set "Leave non-GPU tasks in memory while suspended" to prevent wasted work. Sadly, this setting does not exist on Android.
• For BOINC memory usage, I want the "in use" to match the "not in use" so that I don't risk losing progress on tasks every time I quickly use a computer. I usually decrease "When computer is not in use, use at most ___% memory" to match the default "in use" setting of 50% (of total physical memory). If I had a computer with lots of RAM, I would set both values to 90%, especially if it's running Linux and rarely used by people. You really shouldn't be depending on these settings; your choice of projects and project preferences should not exceed your available RAM.
• I want GPU tasks to have slightly higher CPU priority than CPU-only tasks so that the CPU is not limiting the GPU! The default settings on Windows are perfect: GPU tasks run at "below normal" priority, and BOINC and Folding@home CPU-only tasks run at "low" priority. (You can probably change this via the cc_config.xml file.) I also set Resource Share high on my GPU projects to ensure that there is always a GPU task running. You can also have BOINC not run 100% of the CPUs, but there is a better way of doing this (see my section on app_config.xml)!
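If you do want to change those process priorities, the BOINC client's cc_config.xml has priority options. This is a sketch based on my reading of the client configuration docs; verify the option names and value meanings (0 being lowest) against the current documentation:

```xml
<cc_config>
  <options>
    <!-- priority of ordinary (CPU-only) science applications: 0 = lowest -->
    <process_priority>0</process_priority>
    <!-- priority of GPU and wrapper applications: slightly higher -->
    <process_priority_special>1</process_priority_special>
  </options>
</cc_config>
```

After editing the file, restart the client (or do Options → Read Config Files in BOINC Manager) for it to take effect.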
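As a sketch of the CPU-percentage rounding described in the first bullet above (the flooring behavior is my assumption based on the 83.3-vs-83.4 observation, not taken from BOINC's source code):

```python
# How a "Use at most __% of the CPUs" value maps to a core count,
# assuming BOINC floors the product (an assumption, not verified
# against the BOINC source).
import math

def cores_used(total_cores, percent):
    return max(1, math.floor(total_cores * percent / 100))

print(cores_used(6, 83.3))  # -> 4 (6 * 0.833 = 4.998 floors to 4)
print(cores_used(6, 83.4))  # -> 5 (6 * 0.834 = 5.004 floors to 5)
```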
Regarding project preferences...
• For projects that can use my GPUs, I turn off CPU-only tasks.
• For any project that will let me, I allow beta (test) tasks because I trust the projects to know what's best more than me, and I don't want to become one of those people who only cares about credits.
• For some projects (the final section here has a list of projects), I have to change various settings to allow them to publicly export data, publicly display data, link devices, etc. This allows me to track my progress across projects and to let other people see basic info about my computers.
• To halt a computer remotely, I reserve a location, specifically home, to have project settings that prevent computers from getting work (for me, changing computing settings won't do anything because I set each computer locally). Then, if I want to stop work remotely, I can simply move computers to home! The computer will finish current tasks, then that's it! Sadly, not all projects have settings that let you do this even if you are trying to be very clever.
• To have a computer with more resources run extra sub-projects, I reserve a location, specifically work, to get these sub-projects. For example, for WCG, I have a computer that can do the Africa Rainfall Project, so I move that computer to work!
• To allow a device to only run a project when other projects run out of work, I reserve a location, specifically school, to have "Resource share" set to 0. For example, Universe@home is a great backup project!
On Android, there are more considerations (in addition to there being no "Leave non-GPU tasks in memory while suspended" option)...
• First of all, you need to install the app called "BOINC" made by U.C. Berkeley.
• The app doesn't list all main projects that support Android, but, once you add your first project, there's an "add project by URL" option.
• ARM CPUs are very energy efficient, but a phone has no fans, and a thick case without airflow can cause your phone to overheat. BOINC will suspend before your phone turns off, but let's avoid this situation! By using the CPU-Z app, I have found that my phone's battery temperature increases about 1 °C when using a thin case and another degree when I don't prop it up a bit to allow air to get to its back (and I have never believed in using inductive charging). Also, I have found that it will be another degree cooler if you prop it up with the back facing up. Perhaps the best is to sit the phone flat on a metal table; I haven't measured it yet, but the phone always feels cool!
• A complication is that Android sometimes won't let a process use all CPU cores. I tested an old Kindle by comparing "run time" and "CPU time" on completed tasks, and I found that it used 2 of the 4 cores whether run as background or as foreground (if I set BOINC to use more than 2 cores, the "run time" became significantly longer than "CPU time"). I tested my 4+4 core phone by using adb shell, and I found that, as a background process, only the 4 little cores can be used. As a foreground process, 3 little cores and 4 big cores could be used (little cores are preferred when running fewer than 7 tasks, but an 8th task would hop between little and big cores). I tried a lot of apps to see details of CPU usage, and, on my non-rooted phone (not the Kindle), CPU Monitor is the only one that "worked", though I quickly deleted it because it would always run in the background. I recommend setting your device to use the number of cores that will actually be working; otherwise, (1) you will lose more work each time you unplug the device, (2) you will slow the project's processing of individual work units, and (3) you will use more RAM. Interestingly, when set to 7 tasks as a foreground process (using 4 big cores), it seems to run at about the same battery temperature as just using 1 big core. From what I can tell, Android will throttle the big cores for various complicated reasons.
• A related complication is that BOINC can run as a foreground process if you open the app too quickly after the phone restarts. To easily get a sense for how long BOINC takes to start, have your phone in a "BOINC friendly" state (plugged in, over 90% charged, etc.), then restart your phone and wait for the little "Computing" notification to appear! On my phone set to run 4 of 8 cores, running in foreground causes 3 little cores and 1 big core to run instead of 4 little cores. Using a big core causes my battery temperature to go too high for my comfort.
• In 2020, the "no new tasks" setting is initially somewhat hidden. Don't go to the "Projects" tab, but go to the tab of the actual project then start tapping around. After you use it once, the setting appears in all the expected places.
• Using the GPU (Mali or Adreno) is not an option for any known projects. Projects may never use GPUs due to heat.
• So that I can check my phone without all my tasks going back to their most recent checkpoint, I set "Don't require screen off" and I change my "max other-CPU" from 50% to 100%. These settings could be an issue if battery temperature goes over 40 °C, so I install the CPU-Z app to verify that battery temperature is safe.
• On my Samsung phone, something called "Device care" keeps complaining. I just ignore it because it doesn't understand that BOINC will suspend if things get too hot, suspend if I unplug the device, and only run when battery is more than 90% charged.
Using a Raspberry Pi, there are some considerations...
• Pis are great for using CPUs on BOINC! Pis are cheap, have very energy-efficient ARM CPUs, automatically restart after any power outage, and run BOINC immediately after a restart. For some projects, RAM can be a limitation, but the Pi 4 allows you to get up to 8 GB of RAM!
• Try not to touch exposed parts of the Pi, especially when it's on! Electrostatic discharge once caused almost all my tasks from any project to eventually say "Error while computing" for days until I shut down and unplugged the Pi (though maybe a simple restart could have fixed it).
• I recommend putting a heat sink on the Pi's processor, and orient the heatsink in a way that allows for vertical airflow when the Pi is placed on its side (you should place the Pi on its side!), but another orientation might be better if a fan is running horizontally like this. Even like this, running all cores will cause my Pi 3B to very slightly throttle the CPU depending on room temperature. To fix this, an area with a slight breeze reduces the temperature by about 10 °C (the same temperature drop as using 1 fewer core). Here is what the official Pi people have to say about temperatures. To check for throttling, measure the CPU frequency with the `vcgencmd measure_clock arm` command.
• To install BOINC, run the command `sudo apt-get install boinc` (perhaps via SSH!). You can then run boinccmd, run the usual GUI (perhaps via VNC!), or, better yet, run `boincmgr &` via `ssh -Y email@example.com` (the -Y enables trusted X11 forwarding).
• On WCG, the Pi 4B seems to be almost twice as fast as the 3B using a similar amount of electrical power! On my 3B, I sometimes have to run 3 of the 4 (75%) CPUs to prevent overheating, and I limit RAM usage to 60% of the 1 GB of physical memory. In the winter, I can run all 4 CPUs after setting RAM usage to 70%. The `top -o %MEM` command was useful for figuring this out!
• Using the GPU will not likely be an option anytime soon. The main GPGPU interfaces are OpenCL (works on most GPUs), CUDA (only Nvidia GPUs), and now Apple's we-want-to-be-unique-and-not-work-with-anyone-else Metal. Anyway, OpenCL support for Pi certainly needs more work and may not bring much benefit.
You can create neat files called app_config.xml to do things like adjust gpu_usage and cpu_usage for GPU tasks! There is some great documentation on these files, and here is some necessary info when figuring out exactly where to put the files. Once you create the file, in BOINC Manager, do Options → Read Config Files. BOINC Manager may not immediately update some things, but the BOINC client is updated. I don't think Android can do this?
To maximize computation of GPU tasks, these files can adjust gpu_usage and cpu_usage. From what I can tell, when the numbers of several tasks don't add exactly to an integer, gpu_usage is conservative (0.33 will allow 3 tasks on a GPU) while cpu_usage is liberal (0.33 will allow 4 tasks on a single CPU core), though I'm not certain. You may want a gpu_usage less than 1.0 if your GPU can handle more tasks without burning up, but this can slow things down if the tasks fight, not just for computation time, but for GPU RAM. Instead of just reducing gpu_usage from 1.0, some projects have additional configuration files that you can use to better maximize computation!
I usually try to avoid making app_config.xml files, but here is an example of how I use them. When using my Intel GPUs at Einstein@home, the "Binary Radio Pulsar Search (Arecibo)" application only reserves 0.5 CPUs for each GPU task. This is strange because the other Intel-GPU application ("Gamma-ray pulsar binary search #1 on GPUs") reserves 1.0 CPU per task. Reserving less than 1.0 CPU causes 0.0 CPUs to be reserved, so a 4-core CPU will run 4 CPU-only tasks leaving no CPU power left to drive the GPU causing GPU tasks to take many times longer to finish. A fix is to create the following app_config.xml to reserve 1.0 CPU!
<app_config>
   <app>
      <name>einsteinbinary_BRP4</name>
      <max_concurrent>10</max_concurrent>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
I found the app name here, and I put the file here: C:\ProgramData\BOINC\projects\einstein.phys.uwm.edu . Another advantage of doing this is that CPU-only tasks are no longer paused in the middle of work when a radio task finishes followed by a gamma-ray task (both applications now use the same number of CPUs). If I were running multiple GPU tasks at a time, I would not reserve 1.0 CPU for each because, for my tasks, "run time" is much larger than "CPU time", so each GPU task just needs a little bit of a CPU core. By the way, for radio tasks on my Intel GPUs, I might actually do better if I set both gpu_usage and cpu_usage to 0.5 because then the "run time" is slightly less than doubled (and two radio tasks would still use 1 CPU core), but I would rather err on the side of conserving my computer resources.
On a computer with an especially weak Intel GPU, I needed to do the "Suspend GPU computing when computer is in use" setting. But, whenever the computer was in use, an extra unwanted CPU-only task would start running (to be suspended once the computer was idle again)! To prevent this annoying behavior, I reduced the "Use at most __% of the CPUs" by 25% (this computer had 4 cores), and I used the following app_config.xml (note that 0.0 cpu_usage doesn't work)...
<app_config>
   <app>
      <name>einsteinbinary_BRP4</name>
      <max_concurrent>10</max_concurrent>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>0.1</cpu_usage>
      </gpu_versions>
   </app>
   <app>
      <name>hsgamma_FGRPB1G</name>
      <max_concurrent>10</max_concurrent>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>0.1</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
An important thing to say, especially when running GPU tasks, is to experiment for yourself on your own computer. I have read at Einstein@home forums that some people (especially those with a CPU that hyper-threads a single physical CPU core into 2 logical cores) need to reserve 2 logical CPU cores for their GPUs to get best performance. I have also read at these forums that, unlike most other GPU projects, GPU tasks at Einstein require fast memory. I was playing around with a Dell Inspiron 11 3185 (with an AMD A9-9420e processor and a memory upgrade to 8 GB), and, if I tried to run even a single CPU-only task (for WCG), the Einstein gravitational-wave GPU tasks would run several times slower. I decided to not run any CPU-only tasks! As for why this occurred, I asked the Einstein forums! They said my low-end AMD APU only has one FPU between two physical CPU cores and may also have limited bandwidth to the memory that is shared between the CPU and GPU.
Another use of app_config.xml would be to set max_concurrent for rosetta and minirosetta at boinc.bakerlab.org_rosetta project to prevent issues of using too much RAM on certain computers (I found the application names here). Better yet, just set project_max_concurrent for the entire project.
<app_config>
   <project_max_concurrent>2</project_max_concurrent>
</app_config>
Either way, you could then increase Rosetta@home's Resource Share to guarantee that this max becomes a minimum as well, effectively making it a fixed value (unless Rosetta runs out of tasks to give or you start a new project that gets all of BOINC's attention). Without using app_config.xml, I normally would never run two projects that are giving CPU-only tasks on the same device (or two GPU projects) because I hate how BOINC switches between projects mid-task based on long-term (weeks) processing time while taking deadlines into account. When BOINC switches mid-task, you either lose progress or fill up memory. I hate computers thinking for me, like when I type "weather" into my browser and it takes me to "weather.com", which I did not type (though I must have typed it once before), so I have to start thinking about every damn thing I've ever typed and about how my computer is thinking about what I am thinking so that I can outmaneuver it (outmaneuver it by typing "weather " with an extra space or "weather" in another box), but the real fix is to go deep in browser settings and disable cruel autofill. I like control, which app_config.xml provides. Do exactly what I tell you, computer! But, for people who use computers imprecisely, BOINC defaults to reasonable things.
The world is a big place with many projects!
If you want a chance to earn some money from distributed computing and don't mind gimmicks, maybe look into Charity Engine. This project makes distributed computing accessible to any ethical company and donates to charities.
If you like gambling on things that aren't worthwhile, maybe look into mining cryptocurrency. On one hand, you can not mine cryptocurrency and use your computers to help solve problems in the world that will never need to be solved again, or, on the other hand, you can waste money on electricity and ASICs helping the black market.