Please download NVivo 12 from QSR International and use the license code referenced above to activate the software. (Older-versions download page: -qualitative-data-analysis-software/support-services/nvivo-downloads/older-versions)
K-State recently purchased a campus site license for NVivo 10, a qualitative and mixed-methods data analysis software package, for its faculty, staff, administrators, and graduate students, who may download the software from the protected site after signing in with their eID/password. To activate the software, users need the unique K-State license key. The software is for that individual's professional use only and may be installed on up to two computing devices, typically a desktop machine and a laptop.
Electrical outlets and USB ports for charging personal devices are available throughout Waldo Library, including outlets built into the furniture on the first floor. Various device chargers can be checked out at the service desk.
NVivo is software that supports qualitative and mixed methods research. It is designed to help you organize, analyze, and find insights in unstructured (qualitative) data such as interviews, open-ended survey responses, articles, social media, and web content. NVivo is free for all staff, faculty, and students to use on campus, on both their work and personal devices.
During the onboarding process, the Devices list is gradually populated with devices as they begin to report sensor data. Use this view to track your onboarded endpoints as they come online, or download the complete endpoint list as a CSV file for offline analysis.
If you export the device list, the CSV file will include every device in your organization, regardless of any filtering applied in the view itself, so the download can take a significant amount of time depending on the size of your organization.
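For offline analysis, an exported device list can be summarized with a short script. The sketch below assumes Python and a CSV exported from the portal; the "Onboarding Status" column name and the file name are illustrative assumptions and should be adjusted to match the headers of your actual export.

# Minimal sketch: summarize an exported device list offline.
# The "Onboarding Status" column name and the file name are assumptions;
# adjust them to match your actual CSV export.
import csv
from collections import Counter

def summarize_export(path: str) -> None:
    with open(path, newline="", encoding="utf-8-sig") as f:
        rows = list(csv.DictReader(f))
    print(f"Total devices: {len(rows)}")
    status_counts = Counter(r.get("Onboarding Status", "Unknown") for r in rows)
    for status, count in status_counts.most_common():
        print(f"  {status}: {count}")

if __name__ == "__main__":
    summarize_export("device_inventory_export.csv")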
During the Microsoft Defender for Endpoint onboarding process, devices onboarded to MDE are gradually populated into the device inventory as they begin to report sensor data. The inventory is then further populated by devices discovered in your network through the device discovery process. The device inventory has three tabs that list devices by type: Computers and Mobile, Network devices, and IoT devices.
The device inventory opens on the Computers and Mobile tab. At a glance you'll see information such as device name, domain, risk level, exposure level, OS platform, onboarding status, sensor health state, and other details for easy identification of devices most at risk.
Device discovery: integrations with Microsoft Defender for IoT and Corelight are available to help locate, identify, and secure your complete OT/IoT asset inventory. Devices discovered through these integrations appear on the IoT devices tab. For more information, see Device discovery integrations.
At the top of each device inventory tab, you can see the total number of devices, the number of devices that are not yet onboarded, and the number of devices that have been identified as a higher risk to your organization. You can use this information to help you prioritize devices for security posture improvements.
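To act on that prioritization offline, the same exported CSV can be filtered for devices flagged as higher risk or higher exposure. In the sketch below, the "Risk Level", "Exposure Level", and "Device name" column names and the "High" values are assumptions for illustration; match them to the headers and values in your own export.

# Minimal sketch: list devices to prioritize from an exported device inventory CSV.
# Column names and the "High" values are assumptions; match them to your export.
import csv

def high_priority_devices(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8-sig") as f:
        return [
            row for row in csv.DictReader(f)
            if row.get("Risk Level") == "High" or row.get("Exposure Level") == "High"
        ]

if __name__ == "__main__":
    for device in high_priority_devices("device_inventory_export.csv"):
        print(device.get("Device name"), device.get("Risk Level"), device.get("Exposure Level"))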
If you've tried these steps and they didn't resolve your issue, please reach out. Email our team at nvivo-rc@sfu.ca with your questions, or book an appointment with an NVivo expert using the NVivo Consultation Request form.
Alert: You must have the current version of iFullerton installed and cannot have any Bluetooth devices (such as earbuds) connected to your phone or tablet, because connected Bluetooth devices might prevent your phone from connecting and recording your attendance.
Results: Each midwife from Facility A used NeoBeat on an estimated 373 newborns, while each midwife at Facilities B and C used NeoBeat an average of 24 and 47 times, respectively. From FGDs with 30 midwives, we identified five main categories of perceptions and experiences regarding the use of NeoBeat: (1) Providers' initial skepticism evolved into pride and a belief that NeoBeat was essential to resuscitation care, (2) Providers viewed NeoBeat as enabling their resuscitation and increasing their capacity, (3) NeoBeat helped providers identify flaccid newborns as liveborn, leading to hope and the perception of saving lives, (4) Challenges of using NeoBeat included cleaning, charging, and an insufficient quantity of devices, and (5) Providers desired to continue using the device and to expand its use beyond resuscitation and their own facilities.
Conclusion: Midwives perceived that NeoBeat enabled their resuscitation practices, including assisting them in identifying non-breathing newborns as liveborn. Increasing the quantity of devices per facility and developing systems to facilitate cleaning and charging may be critical for scale-up.
SmartStep (Andante Medical Devices Ltd., Beer Sheva, Israel) consists of flexible insoles containing two separate air pockets (one for the forefoot and one for the hindfoot). Tubes are used to inflate/deflate each pocket and to connect the pockets to a microprocessor control unit that is worn around the ankle. The microprocessor control unit contains two pressure sensors and also functions as a feedback unit by producing an audio signal when a preset WB value is reached. A software application on a PC is used to preset upper and lower WB thresholds and to record and analyze WB data. In the online mode, SmartStep communicates with a computer via a wireless Bluetooth USB adapter.
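The threshold-based feedback described above can be illustrated with a small sketch. This is not SmartStep's actual software; the function, the readings, and the kilogram values are hypothetical, and only the idea of signalling when a reading leaves the preset lower/upper WB window comes from the description.

# Hypothetical sketch of threshold-based weight-bearing (WB) feedback,
# not SmartStep's actual software: report when a pressure reading falls
# outside the preset lower/upper WB window.
def check_weight_bearing(reading_kg: float, lower_kg: float, upper_kg: float) -> str:
    if reading_kg < lower_kg:
        return "below target: increase loading"
    if reading_kg > upper_kg:
        return "above target: reduce loading"
    return "within target window"

# Illustrative 15-30 kg partial weight-bearing window.
for reading in (10.0, 22.5, 34.0):
    print(reading, "->", check_weight_bearing(reading, lower_kg=15.0, upper_kg=30.0))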
First, the measurement procedures were explained to the patient and patient characteristics were collected. Before the usability testing began, patients watched an instructional video explaining the think-aloud method with an example. Patients were asked to use the biofeedback devices one after the other during a training session in which PWB with crutches was practiced. Patients were asked to complete a cluster of specific tasks with the devices. Shortly before executing a particular cluster of tasks, patients watched an instructional video explaining the device functions relevant to the task. The order of device presentation was randomised to avoid a learning effect and followed the allocated testing order. The order of the clusters of tasks was not randomised and occurred in the same order as when patients used the device for the first time.
The clusters of tasks were: putting on the device and using the biofeedback device during PWB. Patients were asked to think aloud when carrying out the tasks. Patients were also encouraged to think aloud by using standardized phrases. The session was videotaped. After completion, the session with the other device started, following the same procedures. Subsequently, patients were asked to fill in the SUS questionnaire per biofeedback device after testing both devices, followed by a semi-structured interview that took approximately 20 minutes.
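The SUS questionnaire mentioned above is conventionally scored on a 0-100 scale. The sketch below shows the standard scoring scheme (ten items rated 1-5; odd items contribute the rating minus 1, even items contribute 5 minus the rating, and the sum is scaled by 2.5), with illustrative ratings rather than data from this study.

# Standard SUS scoring: ten items rated 1-5; odd items contribute (rating - 1),
# even items contribute (5 - rating); the sum is multiplied by 2.5 to give 0-100.
def sus_score(ratings: list[int]) -> float:
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("Expected ten ratings between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
                for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0 (illustrative ratings only)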
Although the primary research aim was to describe rather than compare usability, additional tests (paired-samples t-tests or nonparametric Wilcoxon signed-rank tests, both two-tailed with α = .05) were used to compare the SUS scores and User Performances for both devices within both perspectives. The assumption of normality was tested with the Shapiro-Wilk test; when it was violated, the Wilcoxon signed-rank test was used instead of the t-test.
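A minimal sketch of that decision procedure, using scipy and illustrative paired scores rather than the study's data, is shown below.

# Sketch of the analysis described above: test the paired differences for
# normality (Shapiro-Wilk); if normality holds, use a two-tailed paired t-test,
# otherwise a Wilcoxon signed-rank test. Scores below are illustrative only.
import numpy as np
from scipy import stats

def compare_paired(scores_a, scores_b, alpha=0.05):
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    _, p_normal = stats.shapiro(diffs)
    if p_normal >= alpha:
        stat, p = stats.ttest_rel(scores_a, scores_b)  # paired-samples t-test
        return "paired t-test", stat, p
    stat, p = stats.wilcoxon(scores_a, scores_b)       # Wilcoxon signed-rank test
    return "Wilcoxon signed-rank", stat, p

device_1 = [72.5, 65.0, 80.0, 55.0, 90.0, 67.5, 75.0, 60.0, 85.0]
device_2 = [60.0, 62.5, 70.0, 50.0, 72.5, 65.0, 57.5, 55.0, 77.5]
print(compare_paired(device_1, device_2))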
The results of the SUS for both biofeedback devices are shown in Table 3. Mean SUS scores of at least 62.7 were considered to indicate acceptable usability, and SUS scores were graded according to the CGS as described in the Methods section. For both SmartStep and OpenGo Science, the mean SUS score of the patients was above 62.7; six and eight patients, respectively, considered the usability acceptable. The mean SUS score of the PTs was below 62.7 for SmartStep and above 62.7 for OpenGo Science; three and eight PTs, respectively, considered SmartStep and OpenGo Science acceptable.
The SUS scores of SmartStep and OpenGo Science were compared with each other. All distributions of the differences between the devices were normally distributed. The paired-samples t-test showed no statistically significant difference in SUS scores between SmartStep and OpenGo Science for the patients; the mean difference was 8.6 (SD = 19.6) points on the SUS (t(8) = 1.3, p = .223). The difference in SUS scores between SmartStep and OpenGo Science for the PTs was also tested with the paired-samples t-test. The mean difference of 27.5 (SD = 14.8) points was statistically significant (t(8) = 5.6, p = .001) in favour of OpenGo Science.
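As a quick check, the reported t statistics can be reproduced from the summary statistics in the text, since for a paired t-test t equals the mean difference divided by (SD of the differences / sqrt(n)), and t(8) implies n = 9 pairs.

# Reproduce the reported t statistics from the summary statistics above:
# t = mean_difference / (sd_difference / sqrt(n)), with n = 9 pairs (df = 8).
from math import sqrt

def paired_t(mean_diff: float, sd_diff: float, n: int) -> float:
    return mean_diff / (sd_diff / sqrt(n))

print(round(paired_t(8.6, 19.6, 9), 1))   # patients: 1.3
print(round(paired_t(27.5, 14.8, 9), 1))  # physical therapists: 5.6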
Satisfaction with the devices, extracted from the think-aloud data and the open questions, is illustrated by thematically categorized examples of quotes from patients and PTs shown in S1 Table. The results showed mixed views and perceptions from patients and PTs on satisfaction. A selection of meaningful quotes is presented for each perspective in the text below. After each quote, the participant code and the biofeedback device involved are given (SM = SmartStep and OG = OpenGo Science).
Looking at acceptability in general, patient acceptability of SmartStep and OpenGo Science for use during supervised rehabilitation was good. For PT acceptability, the data about intention to purchase in the future suggested poor acceptability for SmartStep and good acceptability for OpenGo Science. It should be noted that PTs were not informed of the real price of SmartStep and OpenGo Science; this could have influenced acceptability, because high costs might undermine acceptance, a critical determinant of technology acceptance [45]. Reasons for non-acceptance of SmartStep by the PTs emerging from the interviews were: complexity of the device, intrusiveness of the control unit around the ankle, intrusiveness of the audio feedback, and the availability of more usable devices.