
Simple dependency-free ALSA test rig for PCM capture analysis. #8681

Closed · wants to merge 1 commit

Conversation

@andyross (Contributor) commented Jan 1, 2024

Just drop this script on a test device to run it. No tools to build, no dependencies to install. Confirmed to run on Python 3.8+ with nothing more than the core libraries and a working libasound.so.2 visible to the runtime linker.

When run without arguments, the tool will record from the capture device for the specified duration, then emit the resulting samples back out the playback device without processing (except potentially to convert the sample format from s32_le to s16_le if needed, and to discard any channels beyond those supported by the playback device).
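
The sample-format conversion mentioned above can be sketched as follows. This is illustrative only, not the script's actual code: the function name, signature, and the simple truncating shift are assumptions.

```python
import struct

def s32_to_s16(data: bytes, in_channels: int, out_channels: int) -> bytes:
    """Convert interleaved s32_le frames to s16_le, keeping only the
    first out_channels channels (a sketch of the conversion described,
    not the script's exact implementation)."""
    n = len(data) // 4
    samples = struct.unpack(f"<{n}i", data)
    out = []
    for frame in range(0, n, in_channels):
        for ch in range(out_channels):
            out.append(samples[frame + ch] >> 16)  # keep the top 16 bits
    return struct.pack(f"<{len(out)}h", *out)
```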

Passing --chirp-test enables a playback-to-capture latency detector: the tool will emit a short ~6 kHz wave packet via ALSA's mmap interface (which allows measuring and correcting for the buffer latency from the userspace process) and simultaneously loop on short reads from the capture device looking for the moment it arrives.
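
The detection step can be sketched as a brute-force cross-correlation against the emitted packet. All parameters here (sample rate, packet length, amplitude) are illustrative assumptions, and the real tool additionally corrects for the buffer latency measured through the mmap interface:

```python
import math

RATE = 48000  # assumed sample rate; the script's actual rate may differ

def chirp_packet(freq=6000.0, ms=10, rate=RATE):
    """A short sine 'wave packet' like the ~6 kHz one described
    (illustrative parameters, not the script's exact values)."""
    n = rate * ms // 1000
    return [int(32767 * math.sin(2 * math.pi * freq * i / rate)) for i in range(n)]

def detect_arrival(capture, packet):
    """Return the sample offset where the packet best matches the
    captured stream, found by exhaustive cross-correlation."""
    best_score, best_off = None, 0
    for off in range(len(capture) - len(packet) + 1):
        score = sum(c * p for c, p in zip(capture[off:], packet))
        if best_score is None or score > best_score:
            best_score, best_off = score, off
    return best_off
```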

Passing --echo-test enables a capture-while-playback test. The script will play a specified .wav file ("noise.wav" by default) for the specified duration, while simultaneously capturing, and report the "power" (in essentially arbitrary units, but it's linear with actual signal energy assuming the sample space is itself linear) of the captured data to stdout at the end of the test.
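
A metric that is linear in signal energy is a plain sum of squared samples; a minimal sketch under that assumption (not the script's exact code):

```python
import struct

def capture_power(pcm: bytes) -> int:
    """Sum-of-squares 'power' of s16_le capture data: arbitrary units,
    but linear in actual signal energy when the sample space is itself
    linear (an illustrative sketch of the metric described)."""
    n = len(pcm) // 2
    return sum(s * s for s in struct.unpack(f"<{n}h", pcm))
```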

Signed-off-by: Andy Ross <[email protected]>
@andyross (Contributor Author) commented Jan 1, 2024

SOF has sort of a gap with ALSA-API-level tests of playback and capture behavior. I see multiple single-pipeline tests using aplay/arecord, but nothing that works well for getting numbers out of AEC performance for things like latency and correction gain when using two pipelines in tandem. This is an attempt to plug the hole.

I'm particularly proud of the all-python (via ctypes) implementation, which is easier for all users, but in particular allows this to be dropped directly on an ARM chromebook without having to decide on a cross compilation strategy for SOF's host tools.
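
The ctypes approach can be sketched like this: bind directly to libasound.so.2 at runtime, with no compiled extension and no package to install. The helper name is illustrative, not from the script; `snd_asoundlib_version()` is a real zero-argument libasound entry point.

```python
import ctypes

def load_asound():
    """Sketch of the dependency-free ctypes binding described above.
    Returns None when libasound.so.2 is not available to the runtime
    linker (illustrative helper, not the script's code)."""
    try:
        lib = ctypes.CDLL("libasound.so.2")
    except OSError:
        return None
    lib.snd_asoundlib_version.restype = ctypes.c_char_p
    return lib

lib = load_asound()
print(lib.snd_asoundlib_version().decode() if lib else "libasound.so.2 not found")
```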

While this is aimed at AEC integration work, there's no dependency there and you can use this for any SOF capture pipe on any system. There is a Realtek-specific "--disable-rtnr" argument (which does what it says, as noise reduction seems to interfere with the chirp test), but that's just a wrapper around an ALSA control that's a noop if not present.

Finally, I dropped this in the top level ./scripts directory for visibility. There doesn't seem to be an obvious home in ./tools or ./test. Let me know if there's a better place.

And please review, obviously. I baldly claim this is a "simple" tool, but in fact some of the analysis gets a little subtle and I wouldn't be surprised if I had a few units bugs or missing corrections in there.

@marc-hb (Collaborator) commented Jan 2, 2024

Finally, I dropped this in the top level ./scripts directory for visibility. There doesn't seem to be an obvious home in ./tools or ./test. Let me know if there's a better place.

I think this belongs in either https://github.com/thesofproject/sof-test or (even better) https://github.com/alsa-project/alsa-utils if it's not SOF-specific. Because unlike unit tests, you don't want that script to change while you're git bisecting https://github.com/thesofproject/sof.

git bisect is not the only consideration when choosing which git repo to use but it's the simplest and best litmus test.

@marc-hb (Collaborator) left a review comment:

opts.add_argument("--chirp-test", action="store_true", help="Test latency with synthesized audio")
opts.add_argument("--echo-test", action="store_true", help="Test simultaneous capture/playback")

opts = opts.parse_args()
I see no reason for this code to be at the top-level while everything else is in either a class or a function.

def parse_args():
    global opts
    ...

# the specified duration, while simultaneously capturing, and report
# the "power" (in essentially arbitrary units, but it's linear with
# actual signal energy assuming the sample space is itself linear) of
# the captured data to stdout at the end of the test.

Assign this to a new HELP_STRING constant and then:

argparse.ArgumentParser(description=HELP_STRING, ...)
# or
argparse.ArgumentParser(epilog=HELP_STRING, ...)


There's stuff like https://stackoverflow.com/questions/35917547/python-argparse-rawtexthelpformatter-with-line-wrap if you want to get fancy, but don't worry about it for now. Someone else can fix formatting later :-)
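
A minimal runnable sketch of the suggestion, using the PR's own title and option text (the `HELP_STRING` name is the reviewer's suggestion; the formatter choice is an assumption):

```python
import argparse

HELP_STRING = "Simple dependency-free ALSA test rig for PCM capture analysis."

# Hand the descriptive text to argparse instead of leaving it in comments.
opts = argparse.ArgumentParser(
    description=HELP_STRING,
    formatter_class=argparse.RawDescriptionHelpFormatter)
opts.add_argument("--chirp-test", action="store_true",
                  help="Test latency with synthesized audio")
```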

# constants. The ALSA C API is mostly-structless and quite simple, so
# this tends to work well without a lot of ctypes use except for an
# occasional constructed integer or byref() pointer.
class ALSA:

This looks cool and born to be re-used elsewhere later. So, definitely part of alsa-utils or sof-test.

@andyross (Contributor Author) replied:

Then we need a deployment strategy and can't just drop the script on the target. I'm all for reuse, but not for complicating interfaces to support code that isn't reused yet. And it's really extremely small and unlikely to evolve much.

See for example the "Regs" class in cavstool (and now acetool). Sure, that's technically a python MMIO register block abstraction and not part of a logging tool, but it's likewise tiny and has lived very happily in-script. And no one seems bothered much by the cut-and-paste, given that the 10x larger enclosing script is itself mostly just a copy: https://github.com/zephyrproject-rtos/zephyr/blob/main/soc/xtensa/intel_adsp/tools/cavstool.py#L328

As for alsa-utils, I can try, but there's not a line of python in that quarter-century-old source tree, so I don't know if I'm up for the fights involved.

@marc-hb (Collaborator) replied Jan 2, 2024:

See for example the "Regs" class in cavstool (and now acetool). Sure, that's technically a python MMIO register block abstraction and not part of a logging tool, but it's likewise tiny and has lived very happily in-script

That looks like a much smaller re-use potential to me. Either way that's a future discussion.

As for alsa-utils, I can try, but there's not a line of python in that quarter-century-old source tree, so I don't know if I'm up for the fights involved.

Try and immediately fallback on sof-test if that fails. We already have bits of non-SOF specific test code in sof-test.

cc: @perexg

@marc-hb (Collaborator) replied:

... just drop the script on the target.

On that topic: alsa-utils is available out of the box (with some usual propagation delays) in every Linux distribution. sof-test is not.

@marc-hb (Collaborator) replied:

As for alsa-utils, I can try, but there's not a line of python in that quarter-century-old source tree, ...

Proving you sort-of-wrong by sheer chance (was looking for something else):

https://github.com/alsa-project/alsa-tests/tree/master/python

PCM_STREAM_CAPTURE = 1
PCM_FORMAT_S16_LE = 2
PCM_FORMAT_S32_LE = 10
PCM_ACCESS_MMAP_INTERLEAVED = 0
A reviewer (Member) commented:

these should be able to be pulled in from the library API no?

@andyross (Contributor Author) replied:

They're just preprocessor symbols in the ALSA headers; sadly, they don't appear in the shared library.

bufs.append(crec[0])
energy += crec[1]
play_buf(b''.join(bufs))
print(f"Energy {energy}")
A reviewer (Member) commented:

could we maybe get better separation of the pcm code and the analysis code?

@andyross (Contributor Author) replied:

That's sort of the idea. I mean, it's a 300-line script, so everything is near everything else. But the way it's structured, there are "ALSA" routines like init_stream/do_capture/play_buf that work with the low-level stuff, and the analysis happens up-stack in the code that calls them.

@andyross (Contributor Author) commented Jan 3, 2024

Submitted to sof-test here: thesofproject/sof-test#1144

Looking through, I really don't think alsa-utils is the right spot. Longer term, with some evolution, maybe. But for now this is really best viewed as a smoke test for SOF capture (and AEC in particular, I'm using that chirp with a whitebox internal to the component that I need to get submitted as soon as I can get the rework PR moving).

Unfortunately I can't add reviewers there, so I'm going to leave this one open for now just so I have a place to whine to an audience.

@andyross (Contributor Author) commented Jan 3, 2024

(And I'm pretty sure I addressed the existing comments. The best I could do for @cujomalainey was to reorder everything to put all the ALSA-related code first, and to put "pcm_" or "ctl_" at the front of the routines that touch ALSA APIs.)

@andyross (Contributor Author) commented Jan 3, 2024

Closing this one. The new PR in sof-test is getting lots of review, and I think I've addressed all the notes here.

@andyross andyross closed this Jan 3, 2024