Unable to extract driver version information #35

Open
JTTan807 opened this issue Sep 27, 2016 · 6 comments

Comments

@JTTan807

I encountered the following error message after downloading and installing the latest tnvme and dnvme sources from GitHub.

/nvme/tnvme# ./tnvme --summary
Parsing cmd line: ./tnvme --summary
tnvme-err:tnvme.cpp:667: Unable to extract driver version information
Unable to build the test foundation
/nvme/tnvme#

How do I fix this error?

I checked version.h in both tnvme and dnvme: the minor versions are different.
I am not sure whether this is the cause, but when I manually changed one of them to match the other, the error was still there.
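A first thing to check - assuming, as seems likely, that tnvme reads this version information from the dnvme module - is whether dnvme, rather than the stock in-kernel nvme driver, is actually loaded:

lsmod | grep dnvme   # the test driver should be listed
lsmod | grep nvme    # the stock driver should not be loaded alongside it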

@JTTan807
Author

Hi. I really need help on this. I have followed every single step in the Wiki page to get it up, but the result is still the same. I have traced the error back but found no leads.
Please respond, as I need to get the preset tests running as soon as possible.

What other details do I need to provide for assistance?

@JeffHensel
Contributor

Hi JTTan807,

Sorry for the delayed response. There are many things that could have
caused the issue you see.

My best guess is that the dnvme module is not being installed
correctly. Please follow the instructions below to fix this:

  1. Check whether any other nvme drivers are currently loaded: lsmod | grep nvme
  2. Unload all nvme drivers (on some distributions you will also
    need to rmmod nvme_core): sudo rmmod nvme
  3. Build dnvme by running make in the dnvme folder, then install
    the module: sudo insmod dnvme.ko
  4. Confirm the driver loaded correctly: lsmod | grep dnvme
  5. Build tnvme by running make in the tnvme folder.
  6. Once complete, try to run tnvme again (be sure to run it with sudo).
    The full sequence is sketched below.
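For reference, here is the whole sequence in one place - a minimal sketch, assuming dnvme and tnvme are cloned side by side (adjust the paths to your checkout):

lsmod | grep nvme        # check whether the stock driver is loaded
sudo rmmod nvme          # unload it
sudo rmmod nvme_core     # only needed on some distributions
cd dnvme
make                     # build the test driver
sudo insmod dnvme.ko     # install it
lsmod | grep dnvme       # confirm it loaded
cd ../tnvme
make                     # build the test suite
sudo ./tnvme --summary   # run as root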

I just re-cloned both repositories from git and found no mismatch in the
revision numbers. Could you please confirm you are up to date?

If you are still having issues, please provide your OS and Linux kernel version.

Thank you,
Jeffrey Hensel
University of New Hampshire

@JTTan807
Author

JTTan807 commented Sep 30, 2016

Hi Jeffrey,
OS: Ubuntu 12.04 LTS, 64-bit
Kernel: 3.13.0-32-generic
Intel® Core™2 Quad CPU Q8200 @ 2.33GHz × 4

I have updated the directories in my Makefile and changed 'CDIR' to 'SRCDIR' (as below):
# Modify the Makefile to point to Linux build tree.
DIST ?= $(shell uname -r)
KDIR:=/lib/modules/$(DIST)/build/
SRCDIR:=/usr/src/linux-headers-3.13.0-32/include/
SOURCE:=$(shell pwd)
DRV_NAME:=dnvme
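As a quick sanity check on those paths (just a suggestion), KDIR should resolve to a real kernel build tree:

ls /lib/modules/$(uname -r)/build   # should exist; if not, install the matching linux-headers package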

1) With lsmod | grep nvme, there is one 'nvme' module listed, and 'Used by' only shows the number '2' instead of what is holding it.
2) Cannot unload it with sudo rmmod nvme or sudo rmmod -f nvme:
ERROR: Removing 'nvme': Resource temporarily unavailable
Also cannot unload it with modprobe -r nvme:
ERROR: Module nvme is in use

I was also unable to kill -KILL the two nvme processes shown by ps axjf.
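One common cause of "Module nvme is in use" - an assumption here, not confirmed by the output above - is a filesystem still mounted from the drive. A quick sketch:

lsblk                        # see whether anything is mounted from the nvme device
sudo fuser -mv /dev/nvme0n1  # list processes holding the block device, if any
sudo umount /dev/nvme0n1p1   # unmount mounted partitions first
sudo rmmod nvme              # then try unloading again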

3) Rebooted the PC and repeated your steps 1 to 6 above - successful.
4) But I noticed ls /dev no longer lists the following for the SSD:
nvme0
nvme0n1
nvme0n1p1
(I have an Intel SSD 750 series drive connected.)

It now lists the drive only as nvme0.

The reason I mention this is that the drive failed some of the tests, i.e.:
[failed test: 0:0.0.0: Test: Validate all PCI registers syntactically.]
[failed test: 4:0.1.0: Test: Issue illegal nvm cmd set opcodes.]
[failed test: 5:3.0.0: Test: Issue cmds until both ASQ and ACQ fill up.]
[skipped test: 6:0.3.0: Test: Set unsupported/rsvd fields in cmd]

I do not know whether this is related to the device nodes listed, i.e. a different driver being used.

Thanks.
JT

@JeffHensel
Contributor

Hi JT,

The reason the device only shows as nvme0 is that this is the controller
node itself. The naming scheme in Linux uses nvme0n*** (where *** refers to
the namespace and partition numbers) for the block devices on top of it.
The one nvme device is recognized as nvme0.

As far as the failures and skips go, these could be real bugs (device- or
script-related). With a similar device I do observe the failures with
4:0.1.0 and 0:0.0.0; these could be a script issue, and I will need to look
into it more. However, the failure with 5:3.0.0 was not observed here. It
appears that after running 4:0.1.0 the device may be thrown into an odd
state until the host is restarted. Could you confirm the same with your
device?

The reason 6:0.3.0 skips is that a flag is checked to decide whether the
test should run or be skipped (see the -b or --rsvdfields flag); an example
follows.
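For example, something like the following should exercise it (the --test spelling for selecting a single test is an assumption on my part; check ./tnvme --help for the exact syntax):

sudo ./tnvme -b --test 6:0.3.0   # -b enables the reserved-fields tests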

As far as using another driver goes, tnvme will not run unless it detects
that dnvme is installed, hence the issue you saw before.

Thank you.
Jeffrey Hensel

@JTTan807
Author

JTTan807 commented Oct 3, 2016

Hi Jeffrey,
Every time I restart the host (the drive is still physically connected to the system), the stock OS nvme driver gets loaded by default, and I have to manually rmmod nvme before adding dnvme back.
sudo insmod dnvme.ko would not work, so I used modprobe -v dnvme instead.
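In case it is useful: the stock driver can be kept from auto-loading at boot by blacklisting it. A sketch for a Debian/Ubuntu setup (the driver may also be packed into the initramfs, hence the image rebuild):

echo 'blacklist nvme' | sudo tee /etc/modprobe.d/blacklist-nvme.conf
sudo update-initramfs -u   # rebuild the boot image so the blacklist takes effect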

I restarted the host and re-ran only group 5. Still the same failure at 5:3.0.0.
I am not sure whether the drive is still in an odd state, but the following is part of the terminal dump for the group 5 tests:
...
tnvme:cq.cpp:274: 1 CE's awaiting attention in CQ 0, ISR count: 0
tnvme:cq.cpp:274: 1 CE's awaiting attention in CQ 0, ISR count: 0
tnvme:cq.cpp:274: 1 CE's awaiting attention in CQ 0, ISR count: 0
tnvme:cq.cpp:391: Timeout: (cur - init) >= TO: (1475486980692553 - 1475486970683754) >= 10000000
tnvme-err:cq.cpp:352: Timed out waiting 10000 ms for 2 CE's in CQ 0, found 1
tnvme:cq.cpp:192: dnvme metrics pertaining to CQ ID: 0
tnvme:cq.cpp:193: tail_ptr = 1
tnvme:cq.cpp:194: head_ptr = 0
tnvme:cq.cpp:195: elements = 2
tnvme:cq.cpp:196: irq_enabled = T
tnvme:cq.cpp:197: irq_no = 0
tnvme:cq.cpp:198: pbit_new_entry = 1
tnvme:cq.cpp:354: qMetrics.head_ptr dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 0, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 0, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 0, DWORD2: 0x00000001
tnvme:cq.cpp:248: CQ 0, CE 0, DWORD3: 0x00010000
tnvme:cq.cpp:356: qMetrics.tail_ptr dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 1, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 1, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 1, DWORD2: 0x00000000
tnvme:cq.cpp:248: CQ 0, CE 1, DWORD3: 0x00000000
tnvme:cq.cpp:358: qMetrics.head_ptr+1 dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 1, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 1, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 1, DWORD2: 0x00000000
tnvme:cq.cpp:248: CQ 0, CE 1, DWORD3: 0x00000000
tnvme:cq.cpp:360: qMetrics.tail_ptr+1 dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 0, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 0, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 0, DWORD2: 0x00000001
tnvme:cq.cpp:248: CQ 0, CE 0, DWORD3: 0x00010000
tnvme:io.cpp:390: The CQ's metrics before reaping holds head_ptr
tnvme:kernelAPI.cpp:243: CQMetrics.q_id = 0x0000
tnvme:kernelAPI.cpp:244: CQMetrics.tail_ptr = 0x0001
tnvme:kernelAPI.cpp:245: CQMetrics.head_ptr = 0x0000
tnvme:kernelAPI.cpp:246: CQMetrics.elements = 0x0002
tnvme:kernelAPI.cpp:247: CQMetrics.irq_enabled = T
tnvme:kernelAPI.cpp:248: CQMetrics.irq_no = 0
tnvme:kernelAPI.cpp:249: CQMetrics.pbit_new_entry = 1
tnvme:io.cpp:393: Reaping CE from CQ 0, requires memory to hold CE
tnvme:memBuffer.cpp:165: Init buffer; size: 0x00000010, init: 0, value: 0x00
tnvme:cq.cpp:439: Reaped 1 CE's, 0 remain, from CQ 0, ISR count: 0
tnvme:ce.cpp:146: Decode: CE.status=0x0000
tnvme:ce.cpp:146: SCT = 0x0 (generic cmd status)
tnvme:ce.cpp:146: SC = 0x00 (The cmd completed successfully)
tnvme:cq.cpp:391: Timeout: (cur - init) >= TO: (1475486990693201 - 1475486980693199) >= 10000000
tnvme-err:cq.cpp:352: Timed out waiting 10000 ms for 1 CE's in CQ 0, found 0
tnvme:cq.cpp:192: dnvme metrics pertaining to CQ ID: 0
tnvme:cq.cpp:193: tail_ptr = 1
tnvme:cq.cpp:194: head_ptr = 1
tnvme:cq.cpp:195: elements = 2
tnvme:cq.cpp:196: irq_enabled = T
tnvme:cq.cpp:197: irq_no = 0
tnvme:cq.cpp:198: pbit_new_entry = 1
tnvme:cq.cpp:354: qMetrics.head_ptr dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 1, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 1, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 1, DWORD2: 0x00000000
tnvme:cq.cpp:248: CQ 0, CE 1, DWORD3: 0x00000000
tnvme:cq.cpp:356: qMetrics.tail_ptr dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 1, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 1, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 1, DWORD2: 0x00000000
tnvme:cq.cpp:248: CQ 0, CE 1, DWORD3: 0x00000000
tnvme:cq.cpp:358: qMetrics.head_ptr+1 dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 0, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 0, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 0, DWORD2: 0x00000001
tnvme:cq.cpp:248: CQ 0, CE 0, DWORD3: 0x00010000
tnvme:cq.cpp:360: qMetrics.tail_ptr+1 dump follows:
tnvme:cq.cpp:244: Logging Completion Element (CE)...
tnvme:cq.cpp:245: CQ 0, CE 0, DWORD0: 0x00000000
tnvme:cq.cpp:246: CQ 0, CE 0, DWORD1: 0x00000000
tnvme:cq.cpp:247: CQ 0, CE 0, DWORD2: 0x00000001
tnvme:cq.cpp:248: CQ 0, CE 0, DWORD3: 0x00010000
tnvme:buffers.cpp:86: Dumping to filename: ./Logs/GrpPending/GrpQueues.AdminQFull_r10b.acq.Identify
tnvme:buffers.cpp:87: Dump entire ACQ
tnvme-err:frmwkEx.cpp:60: Exception: adminQFull_r10b.cpp:#164: FAILURE: Unable to see last CE as expected
tnvme:kernelAPI.cpp:271: Write custom string to dnvme's log output: "-------START POST FAILURE STATE DUMP-------"
tnvme:frmwkEx.cpp:76: -------------------------------------------
tnvme:frmwkEx.cpp:77: -------START POST FAILURE STATE DUMP-------
tnvme:ctrlrConfig.cpp:235: Disabling completely the NVME device
tnvme:objRsrc.cpp:199: Group level resources are being freed: 0
tnvme:frmwkEx.cpp:95: --------END POST FAILURE STATE DUMP--------
tnvme:frmwkEx.cpp:96: -------------------------------------------
tnvme:group.cpp:295: 5: GrpQueues: Validates general queue functionality
tnvme:group.cpp:300: 3.0.0: Test: AdminQFull_r10b: Issue cmds until both ASQ and ACQ fill up.
tnvme:group.cpp:301: ------------------END TEST------------------
tnvme:testResults.cpp:81: Iteration SUMMARY
tnvme:testResults.cpp:86: passed : 3
tnvme:testResults.cpp:84: failed : 1 <---
tnvme:testResults.cpp:86: skipped : 0
tnvme:testResults.cpp:86: informative : 0
tnvme:testResults.cpp:89: total tests : 4
tnvme:testResults.cpp:90: total groups : 1
tnvme:testResults.cpp:91: Stop loop execution #0
tnvme:tnvme.cpp:833: Detailed Iteration SUMMARY
tnvme:tnvme.cpp:834: Tests Failed :
tnvme:tnvme.cpp:838: 5:3.0.0
tnvme:tnvme.cpp:840: Tests Skipped :
FAILURE: testing
/nvme/tnvme#

Noted on tnvme skipping 6:0.3.0 (still the same after reruns), although I could not find where the flag is checked.

Yes, tnvme will only run when the dnvme driver module is loaded, but the OS nvme driver gets loaded by default after an OS restart. I did not re-make dnvme and tnvme; instead I just ran rmmod nvme and modprobe dnvme.

Thank you.
JT

@JeffHensel
Contributor

Hi JT,

It appears the script/tool is running correctly.

That being said, I am not sure whether this specific script reports a
valid failure or not. Based on the part of the log you copied in, the
device is not responding before the timeout is reached; this can happen
when the timeout specified for the command is not long enough.

As far as 6:0.3.0 goes, you should be able to run the test if you specify
the -b flag; otherwise we would expect it to be skipped. Skipped tests are
tests that the device would not be expected to run (i.e. an unsupported
command is used). However, this specific test was created to exercise
reserved bits (which at the time was not mandatory functionality in NVMe).
Please note that devices are likely to fail these tests if you run with
the wrong spec revision (please see the -v or --rev flags); an example
follows.
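Something along these lines (the exact revision identifiers accepted are an assumption; ./tnvme --help lists the valid values):

sudo ./tnvme --rev 1.0b -b --test 6:0.3.0   # pin the spec revision before judging failures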

Hopefully this clears up the issues; please let me know if you would like
to know more.

Thank you,
Jeffrey Hensel
University of New Hampshire
