Another huge batch #1751
Conversation
Also added documentation
When a solution is found, also output the job it refers to.
The list of devices is a space-separated list.
Also enforce usage of diff in genesis block for benchmark
Simulation and benchmark are de facto the same thing: users can test hashing speed using simulation. Difficulty 1 is assumed (roughly 4.3 GH/s). Note! The data exposed by benchmark (now removed) was also wrong, as the "min" and "max" values exposed were in reality the first and the last collected values.
This reverts commit 885d8e0.
Allocating the (current ~3GB) DAG needs 5 sec in my environment. This patch reuses the already allocated memory if possible.
- some optimization using lop3
- use ROL8/ROR8 for some cases
This reverts commit 6ecadec.
This reverts commit b1167b1.
Use a single strand from the farm instead of multiple strands (one per miner), which may collide.
A wasted solution is a solution that was found but could not be sent to the pool due to lack of connection.
Compute max and mean. Tidy log.
{
    Guard l(m_activeConnectionMutex);
Why was this removed? It prevents another API call from modifying connections at the same time, or from switching to the next failover pool while we change connections.
I pushed a test commit; nevertheless, none of the mutexes are necessary, as the API lives in the same thread. There is only one thread for everything that is not a miner.
In other words, ethminer's thread count is (1 + number of devices).
In theory (correct me if I'm wrong):
- there could be more than one API connection
- the API could have "half" added the connection (meaning: the size of the URI vector increased but the URI content was not yet copied) while PoolManager tries to use the next (not fully filled) connection
All actions from any number of API connections are executed serialized by io_service.
There is no concurrency to worry about; there is asynchrony to worry about. What can happen is that a call (e.g. Setactiveconnection) does not complete its whole cycle before another call kicks in.
@@ -247,10 +247,11 @@ void PoolManager::stop()
     DEV_BUILD_LOG_PROGRAMFLOW(cnote, "PoolManager::stop() end");
 }

-void PoolManager::addConnection(URI& conn)
+void PoolManager::addConnection(URI& _conn)
Wouldn't it be enough to change the function to void PoolManager::addConnection(URI conn) {
(just remove the reference)?
Could we clang-format (now all sources, see #1712) before merge?
Counting solutions:
Again, no need to guard the increments, as they come from a single serialized strand.
Can you rebase your commits on the master branch?
@naikmyeong this is the master branch
As a consequence I'll remove the adapted documentation from #1743
A lot of rework
This PR is the result of voluntary rework on many parts of the code, to deduplicate as much as possible and give common functions broader usability.
API Interface
I've managed to unify the JSON socket API interface with the HTTP interface. This keeps only one listening endpoint and removes the dependency on the MONGOOSE external project.
The implementation can now be easily extended to support all API methods with GET and POST HTTP verbs (original idea from #1163).
Due to the recently introduced device abstraction we're now able to expose the PCI id in both API calls and HTTP stats output, along with the mode the device has been subscribed with (CUDA/OpenCL). (see #1012)
Note: in this sample the power drain is not displayed, as the Nvidia GTX 1050 Ti does not support this feature, but for other GPUs the power drain is reported correctly if the proper -HWMON value is set. To get the HTTP info page you have to point to the very same endpoint as your API interface. The page auto-refreshes every 10 seconds.
The API call miner_getstatHr has been dropped (it was ethminer-only), and the new and extended miner_getstatdetail now produces this output. This PR also contains the corresponding documentation updates.
Solution submission
I've discovered that each miner having its own strand in its own thread could have led to collisions while submitting. I managed to have each miner submit through the Farm's strand, so solution submission to the socket is guaranteed to be serialized, with the benefit of reduced overhead: only one relevant strand instead of one per miner (less work for the io_service).
Solution and stats data accounting
Until now all accounting operations were somewhat disorganized: technically there was one vector for each type of information to be collected (solutions and their conditions, hashrates, status of miners), which also implied there were several functions to call to pull data from the farm. In addition, all vectors were grown dynamically with a lot of checks like if (vector.size() < index).
I've reworked all accounting under a single structure TelemetryType, which holds several members of TelemetryAccountType and is initialized with room for each miner thread. Each structure type has its own str() method, which helps to remove a lot of operator<< definitions while allowing the output to be treated as strings.
Terminal logging
The output of the rework appears like this (with LOG_PER_GPU enabled).
The first 3 chunks of the log line are Ax:Wx:Rx:Fx, where A means accepted, W means wasted (solutions produced when there is no connection available, e.g. during a switch), R means rejected, and F means failed (only if --no-eval is not set). In curly braces (an array) is the detail for each miner, which is now labelled with a more meaningful cu/cl prefix (CUDA or OpenCL) instead of the previous generic "gpu".
Each miner reports its hashing speed (with the same order of magnitude as the overall hashing speed, i.e. if the farm speed is in Mh then all miners' speeds are in Mh too), the sensor values (if --HWMON is set) and the detail of its solutions (if LOG_PER_GPU): the latter removes the need for a double-line printout. (see #1605)