Hangfire
Hangfire jobs are automated tasks that run in the background; they are critical for a working Spider platform.
The Hangfire jobs are configured in the Web.config files and can be invoked manually from the IIS server.
The Hangfire dashboard is only viewable and usable from the server itself.
Hangfire Link: [SPIDER_DOMAINNAME]/dashboard/hangfire/recurring
Info
The Hangfire jobs can only be viewed in a browser on the IIS server itself. If the link does not work, edit the Windows hosts file and map the DNS name to the local IP address.
Hangfire - configuration
The configuration of the background tasks is done with settings in the web.config (C:\inetpub\<IWM-site>\web.config), between the <appSettings> and </appSettings> tags. Below is an overview of the default settings of these tasks.
Info
As of 1.8.0, the web configurations below have been moved to the Spider dashboard in Settings > Cronjob
| Task - webconfig | Runtime | Configurable in |
|---|---|---|
| PerformanceTime | 40 22 * * * | Settings > Cronjob |
| CheckSLATime | 00 23 * * * | Settings > Cronjob |
| ScrapeTime | */30 8-21 * * * | Settings > Cronjob |
| NeedsOkTime | */20 8-21 * * * | Settings > Cronjob |
| MtInfoTime | 0 1 * * * | Settings > Cronjob |
| RFNScrapeTime | 10 23 * * * | Settings > Cronjob |
| BusinessSupportDateTime | */5 * * * * | Settings > Cronjob |
| MoveAlarmsToHistoryTime | 20 22 * * * | Settings > Cronjob |
| MoveAuditLogsToHistoryNumberOfDaysToRetain | 100 | Settings > Cronjob |
| MoveGeneralSignalLogsToHistoryTime | 0 22 * * * | Settings > Cronjob |
| MoveRequestsToHistoryTime | 50 22 * * * | Settings > Cronjob |
| DeleteBodyContentFromHistoryEnabled | true/false | Worker/Web.config |
| DeleteBodyContentFromHistoryTime | 15 1 * * * | Worker/Web.config |
| DeleteBodyContentFromHistoryNumberOfMonthsToRetain | 1 | Worker/Web.config |
| DeleteVariablesContentFromHistoryEnabled | true/false | Worker/Web.config |
| DeleteVariablesContentFromHistoryNumberOfMonthsToRetain | 1 | Worker/Web.config |
| UpdateWorkstationStatusToInUpdateTime | */20 * * * * | Settings > Cronjob |
| SaioScheduleTime | expired | |
| ImportTaskConfigurationProcessesTime | */2 * * * * | Settings > Cronjob |
| FillPriorityInWorkloadTime | expired | |
| ErrorLogEnabled | true/false | Settings > Cronjob |
| DeleteErrorLogContentTime | 45 1 * * * | Settings > Cronjob |
| DeleteErrorLogContentNumberOfDaysToRetain | 30 | Settings > Cronjob |
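The Runtime values above use standard five-field cron syntax (minute, hour, day of month, month, day of week). A minimal sketch of how such expressions are evaluated follows; this is generic cron matching, not Spider's actual scheduler, and the function names are illustrative:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '*/n', 'a-b', or a number) against a value."""
    if field == "*":
        return True
    if field.startswith("*/"):               # step values, e.g. */30
        return value % int(field[2:]) == 0
    if "-" in field:                         # ranges, e.g. 8-21
        lo, hi = map(int, field.split("-"))
        return lo <= value <= hi
    return int(field) == value               # plain number

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a five-field cron expression against a datetime (weekday: 0 = Sunday)."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            and field_matches(dow, (when.weekday() + 1) % 7))

# CheckSLATime fires at 23:00 every day:
print(cron_matches("00 23 * * *", datetime(2024, 1, 1, 23, 0)))       # True
# ScrapeTime fires every 30 minutes between 08:00 and 21:59:
print(cron_matches("*/30 8-21 * * *", datetime(2024, 1, 1, 12, 30)))  # True
```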
Hangfire - tasks
| Task - Hangfire | Active |
|---|---|
| Performance (1) | expired |
| ProvideBusinessSupportData | yes |
| RFNScrape | yes |
| NeedsOk | yes |
| Scrape | yes |
| MtInfo | yes |
| MoveGeneralSignalLogsToHistory | yes |
| MoveAuditLogsToHistory | yes |
| MoveAlarmsToHistory | yes |
| CheckSla | yes |
| MoveRequestsToHistory | yes |
| FillProcessIdInRequest (2) | expired |
| FillProcessIdInRequestHistory (3) | expired |
| UpdateWorkstationStatusToInUpdate (4) | yes |
| SaioScheduler (5) | expired |
| ImportTaskConfigurationProcessesTime (6) | yes |
| DeleteErrorLogContent (7) | yes |
| ProvideBusinessSupportConfigurationData (8) | yes |
(1): This task is no longer active since 1.6.7.
(2): This is a one-time task that must be run when upgrading from version 1.6.6+. It populates the SQL Request table with the ProcessId. This job was dropped in 1.8.0.
(3): This is a one-time task that must be run when upgrading from version 1.6.6+. It populates the SQL RequestHistory table with the ProcessId. This job was dropped in 1.8.0.
(4): This task puts a workstation into the inUpdate state or takes it out of it.
(5): This job was dropped in version 1.7.0. A change to the Spider HeartBeat service replaces this Hangfire job.
(7): This task was added in version 1.7.0 and is visible in Hangfire as long as ErrorLogEnabled is set to true.
(8): New in version 1.11.0. This task sends SAIO and Spider license information, as well as Spider processes, to the Nidaros BSD dashboard.
General jobs
ProvideBusinessSupportConfigurationData
This task sends additional information from the RPA platform to the Nidaros BSD, allowing Nidaros to monitor its customers' processes.
Would you like Nidaros to take over your monitoring as well? If so, please contact us to discuss the possibilities.
SAIO Information
Starting from SAIO 6.1, we can retrieve license information and send it to our BSD via the Spider. This way, we can advise our customers more quickly and effectively if an upgrade is necessary.
Spider License Information
We have also expanded this task to retrieve Spider license information. This allows us to advise our customers more quickly and effectively if an upgrade is necessary.
Spider Processes
Additionally, this task sends the Spider processes (encrypted) to our BSD.
CheckSLA Time
This task checks whether any requests have completed outside the SLA time. If so, the SLA Expired flag is set to True and an alarm is generated with the message: "Request nr exceeded the agreed maximum lead time for process: processname".
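The check above can be sketched as follows; the record layout and function name are illustrative, not Spider's actual schema (the real task works against the SQL Request tables):

```python
from datetime import datetime, timedelta

def check_sla(requests, max_lead_time: timedelta):
    """Flag completed requests whose lead time exceeded the agreed SLA
    and collect the alarm messages that would be generated."""
    alarms = []
    for req in requests:
        lead_time = req["completed"] - req["created"]
        if lead_time > max_lead_time and not req.get("sla_expired"):
            req["sla_expired"] = True
            alarms.append(f"Request {req['nr']} exceeded the agreed maximum "
                          f"lead time for process: {req['process']}")
    return alarms

requests = [
    {"nr": 101, "process": "Invoices", "created": datetime(2024, 1, 1, 8, 0),
     "completed": datetime(2024, 1, 1, 11, 0)},   # 3h lead time -> over SLA
    {"nr": 102, "process": "Invoices", "created": datetime(2024, 1, 1, 8, 0),
     "completed": datetime(2024, 1, 1, 8, 30)},   # 30min -> within SLA
]
print(check_sla(requests, max_lead_time=timedelta(hours=2)))
```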
Scrape Time
This task picks up all requests with failed statuses:
- Request_FailReported_Name
- Request_WorkFailed_Name
- Request_StagingFailed_Name
and forwards them to a final state: Request_RecordFailed_Name, writing a corresponding AuditTrail notification. After this, the task processes the Workloads that are stuck in the status Workload_RobotProcess_Name or Workload_RecordRequested_Name.
Conditions for picking these up are:
- The process must be scrapable (CanbeScraped = True)
- Scraping must be enabled on the process step.
The actions that are performed next are:
- Workload with the status Workload_RecordRequested_Name is always reset to Workload_RecordNew_Name. An associated AuditTrail message is also written.
- Workload with the status Workload_RobotProcess_Name is always passed to Workload_RobotFailed. Also, a SendFailEmail is sent to the user. Finally, an associated AuditTrail notification is written.
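The transitions above can be sketched as follows; the status names come from the text, but the data shapes and function name are illustrative, not Spider's actual implementation:

```python
# Transition rules of the Scrape task; status names come from the text,
# the record shapes are illustrative, not Spider's actual schema.
FAILED_REQUEST_STATUSES = {
    "Request_FailReported_Name",
    "Request_WorkFailed_Name",
    "Request_StagingFailed_Name",
}

def scrape(requests, workloads):
    """Apply the Scrape task's transitions and return the audit messages."""
    audit = []
    for req in requests:
        if req["status"] in FAILED_REQUEST_STATUSES:
            req["status"] = "Request_RecordFailed_Name"   # final failed state
            audit.append(f"Request {req['nr']} moved to final failed state")
    for wl in workloads:
        if not wl["can_be_scraped"]:
            continue  # process must be scrapable
        if wl["status"] == "Workload_RecordRequested_Name":
            wl["status"] = "Workload_RecordNew_Name"      # always reset
            audit.append(f"Workload {wl['id']} reset to new")
        elif wl["status"] == "Workload_RobotProcess_Name":
            wl["status"] = "Workload_RobotFailed"         # fail + mail user
            audit.append(f"Workload {wl['id']} failed; fail email sent")
    return audit

reqs = [{"nr": 1, "status": "Request_WorkFailed_Name"}]
wls = [{"id": 7, "status": "Workload_RobotProcess_Name", "can_be_scraped": True}]
print(scrape(reqs, wls))
```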
NeedsOk Time
This task picks up all requests that have been left in the NeedsOk state for too long and, when within the configured interval, sends emails to the people who should receive a NeedsOk notification.
RFN Scrape Time
This task picks up all requests with the status RFN that were completed more than 7 days ago. These requests get the status RRF. A notification is also written to the Audit Trail:
Request nr
Process: Request Failed Not Reported Scraping Process at date
Details: Request status changed from RFN to RRF.
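A sketch of this rule, with an illustrative record layout (not Spider's actual schema):

```python
from datetime import datetime, timedelta

def rfn_scrape(requests, now, retention=timedelta(days=7)):
    """Move RFN requests completed more than `retention` ago to RRF
    and collect the audit-trail messages."""
    audit = []
    for req in requests:
        if req["status"] == "RFN" and now - req["completed"] > retention:
            req["status"] = "RRF"
            audit.append(f"Request {req['nr']}: status changed from RFN to RRF")
    return audit

now = datetime(2024, 1, 15)
requests = [
    {"nr": 1, "status": "RFN", "completed": datetime(2024, 1, 1)},   # > 7 days old
    {"nr": 2, "status": "RFN", "completed": datetime(2024, 1, 14)},  # too recent
]
print(rfn_scrape(requests, now))
```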
MtInfo & Business Support
MtInfoTime
This task generates the management information and sends it by email to the people added in the MtInfo SLA. The MtInfo can be sent daily, weekly, or monthly; the frequency can be set per user in the SLA of a process.
Business SupportData Time
If it is desired that Nidaros monitor your RPA platform, this setting can be activated in the Spider settings.
This task sends anonymous data by email or API call to Nidaros Business Support. This can be configured in the Spider under Settings > Business Support.
This allows Nidaros to monitor your processes without requiring physical access to your machines.
Tasks - History
The history tasks keep the Spider's main tables clean, which also safeguards performance.
Move Alarms To History Time
All alarms with Alarm Record Solved (ARS) status are moved to the history tables.
SQL tables:
- Alarm > AlarmHistory
Move AuditLogs To History Time
All audit logs older than the specified number of days (MoveAuditLogsToHistoryNumberOfDaysToRetain, default 100 days) are moved to history.
SQL tables:
- AuditLogs > AuditLogHistory
- AuditLogDetails > AuditLogDetailHistory
Move GeneralSignalLogs To History Time
All general signal logs except the most recent per combination (general signal / workstation) are moved to the history tables. The most recent entries remain in the current table; this is required for the heartbeat functionality.
SQL tables:
- GeneralSignalLog > GeneralSignalLogHistory
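The keep-the-most-recent-per-combination rule can be sketched as follows (illustrative data shape and function name, not Spider's actual tables):

```python
def move_general_signal_logs(logs):
    """Keep only the most recent log per (signal, workstation) combination;
    return (current rows, rows to move to history)."""
    latest = {}
    for log in logs:
        key = (log["signal"], log["workstation"])
        if key not in latest or log["ts"] > latest[key]["ts"]:
            latest[key] = log
    keep = {id(l) for l in latest.values()}
    history = [l for l in logs if id(l) not in keep]
    return list(latest.values()), history

logs = [
    {"signal": "heartbeat", "workstation": "WS1", "ts": 1},
    {"signal": "heartbeat", "workstation": "WS1", "ts": 2},  # newest for WS1
    {"signal": "heartbeat", "workstation": "WS2", "ts": 1},  # only one for WS2
]
current, history = move_general_signal_logs(logs)
print(len(current), len(history))  # 2 1
```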
Move Requests To History Time
All requests with the following statuses are transferred to the history tables.
- Request Record Success (RRS)
- Request Record Failed (RRF)
- Request Record Returned (RRR)
All associated sub-objects, such as audit trail, report log, signallog, staging and workloads are also moved to history.
SQL tables:
- AuditTrail > AuditTrailHistory
- ReportLog > ReportLogHistory
- SignalLog > SignalLogHistory
- Staging > StagingHistory
- Workload > WorkloadHistory
Tasks - Cleaning of Data (privacy)
Spider offers the possibility of data cleaning. Depending on your requirements, this can be configured in the Spider web configuration.
Delete BodyContent From History
Prerequisites:
- The value of DeleteBodyContentFromHistoryEnabled should be set to True.
The DeleteBodyContentFromHistoryTime task is then triggered.
All body information from a request and the staging table older than the specified number of months (DeleteBodyContentFromHistoryNumberOfMonthsToRetain) is deleted and overwritten with the text: The body has been deleted for privacy reasons at <current_date + time>.
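A sketch of this retention rule, assuming an illustrative record layout and approximating "older than N months" as N × 30 days:

```python
from datetime import datetime

def delete_body_content(rows, now, months_to_retain=1):
    """Overwrite the body of rows older than the retention window
    with the privacy notice described above."""
    # Approximate "older than N months" as N * 30 days (illustrative only).
    cutoff_days = months_to_retain * 30
    notice = f"The body has been deleted for privacy reasons at {now:%Y-%m-%d %H:%M}"
    for row in rows:
        if (now - row["inserted"]).days > cutoff_days:
            row["body"] = notice
    return rows

rows = [
    {"inserted": datetime(2023, 1, 1), "body": "<sensitive payload>"},  # old
    {"inserted": datetime(2024, 1, 1), "body": "<recent payload>"},     # recent
]
delete_body_content(rows, now=datetime(2024, 1, 10))
print(rows[0]["body"])
```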
Delete Variables Content From History
Prerequisites:
- The value of DeleteVariablesContentFromHistoryEnabled should be set to True.
The DeleteVariablesContentFromHistoryTime task is then triggered.
All variable information in the staging table older than the specified number of months (DeleteVariablesContentFromHistoryNumberOfMonthsToRetain) is deleted and overwritten with the message: The variables have been deleted for privacy reasons at <current_date + time>.
Tasks - SAIO Connection
There are two tasks that are used when rescheduling a SAIO robot. More information about the statuses can be found at Workstation in Update.
Update Workstation Status To In Update
This task updates the status of the workstation in the Spider. Once a workstation enters a maintenance schedule, the task sets the workstation's status to "in Update". When the schedule expires, the task resets the workstation to its previous status.
SAIO Scheduler
This job was dropped in version 1.7.0. A change has been made to the Spider HeartBeatservice that replaces this HangfireJob.
Import Task Configuration Processes Time
This task runs by default every 2 minutes. It checks whether there are import task configurations that need to be added to the Todo. When the job runs, a start timer is set per configuration. Once the timer matches or exceeds the time + interval setting (for example, an interval of 10 minutes), a new request is added to the Todo.
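Under that reading (a per-configuration timer compared against its interval), the logic could be sketched as follows; names and data shapes are illustrative, not Spider's actual implementation:

```python
from datetime import datetime, timedelta

def due_import_tasks(configs, now):
    """Return the import task configurations whose interval has elapsed,
    stamping a new start time on each (sketch of the timer logic above)."""
    due = []
    for cfg in configs:
        if now >= cfg["last_run"] + timedelta(minutes=cfg["interval_minutes"]):
            cfg["last_run"] = now   # reset the start timer
            due.append(cfg["name"])
    return due

configs = [
    {"name": "import-invoices", "interval_minutes": 10,
     "last_run": datetime(2024, 1, 1, 12, 0)},
    {"name": "import-orders", "interval_minutes": 60,
     "last_run": datetime(2024, 1, 1, 12, 0)},
]
# At 12:10 only the 10-minute configuration is due:
print(due_import_tasks(configs, now=datetime(2024, 1, 1, 12, 10)))
```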
Delete Content - Spiderlogging
If debugging is needed to find out what goes wrong with API calls, this feature can be activated in the Worker Web.config.
Activate the following Worker Web.config settings:
- Enable ErrorLogEnabled by changing the value to true.
- Adjust DeleteErrorLogContentNumberOfDaysToRetain to the desired number of days; all data older than that is then automatically deleted from the SQL ErrorLog table.
- Adjust DeleteErrorLogContentTime to the time it should run.

| ErrorLogEnabled | true |
| DeleteErrorLogContentTime | 45 1 * * * |
| DeleteErrorLogContentNumberOfDaysToRetain | 10 |

- Restart IIS after this.
- Refresh the Worker Swagger page or make an API call to the Worker API to activate the API.
Once ErrorLogEnabled is set to true, all API calls in the Worker API that go wrong are logged in the SQL table ErrorLog. This provides an initial insight for debugging.
It is recommended to use the logging feature only when problems occur. After debugging, set ErrorLogEnabled back to false and restart IIS.
Delete Content - Example
Trigger a Report with a ReportID that does not exist in the Spider.
Output API Call:
"Message": "There was an error adding the report log: Cannot find the specified Report"
SQL: ErrorLog
- ErrorLogId: 1
- RequestId: 6587
- WorkloadId: NULL
- InsertDt: 2022-11-30 12:07:58.087
- Message: RequestId: 6587 WorkloadId: N/A Url: /worker/v3/reportlog HTTP Method: POST Details: There was an error adding the report log: Nidaros.IWM.Library.Exceptions.ForeignKeyException: Cannot find the specified Report at Nidaros.IWM.Library.Exceptions.ExceptionHandler.HandleDBException(Exception exception) in D:\Sources\iwm_dashboard\APIs\Library\Exceptions\ExceptionHandler.cs:line 49 at Nidaros.IWM.DataLayer.Components.ReportLogComponent.<Add>d__7.MoveNext() in D:\Sources\iwm_dashboard\APIs\DataLayer\Components\ReportLogComponent.cs:line 138 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Nidaros.IWM.MiddleTier.Managers.ReportLogManager.<AddAsync>d__3.MoveNext() in D:\Sources\iwm_dashboard\APIs\MiddleTier\Managers\ReportLogManager.cs:line 61 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Nidaros.IWM.RobotAPI.Controllers.Base.ApiExtendedBaseController`1.<Post>d__0`1.MoveNext() in D:\Sources\iwm_dashboard\APIs\RobotAPI\Controllers\Base\ApiExtendedBaseController.cs:line 0