<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-lechler-mlcodec-test-battery-01" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="MlCodecTestBattery">Test Battery for Opus ML Codec Extensions</title>
    <seriesInfo name="Internet-Draft" value="draft-lechler-mlcodec-test-battery-01"/>
    <author fullname="Laura Lechler">
      <organization>Cisco Systems</organization>
      <address>
        <postal>
          <country>United Kingdom</country>
        </postal>
        <email>llechler@cisco.com</email>
      </address>
    </author>
    <author fullname="Kamil Wojcicki">
      <organization>Cisco Systems</organization>
      <address>
        <postal>
          <country>Australia</country>
        </postal>
        <email>kamilwoj@cisco.com</email>
      </address>
    </author>
    <date year="2025" month="October" day="20"/>
    <area>Applications and Real-Time</area>
    <workgroup>Machine Learning for Audio Coding</workgroup>
    <keyword>mushra</keyword>
    <keyword>drt</keyword>
    <keyword>evaluation</keyword>
    <abstract>
      <?line 208?>

<t>This document proposes a methodology and data for the evaluation of machine learning (ML) codec extensions,
such as deep audio redundancy (DRED), within the Opus codec (RFC 6716).</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-lechler-mlcodec-test-battery/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Machine Learning for Audio Coding Working Group mailing list (<eref target="mailto:mlcodec@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/mlcodec/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/mlcodec/"/>.
      </t>
    </note>
  </front>
  <middle>
    <?line 214?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>The IETF machine learning for audio coding (mlcodec) working group aims to 
leverage current and future opportunities presented by ML codecs 
to enhance the Opus codec <xref target="RFC6716"/> and its extensions, 
including improved speech coding quality and robustness to packet loss. 
Effective evaluation of codec extensions (such as DRED),
in both standalone and redundancy settings,
is a crucial factor in achieving those objectives.
It supports reproducibility for existing extensions 
(for instance, by enabling validation of whether a retraining pipeline matches baseline model performance)
and enables benchmarking of future improvements against previously established baselines.</t>
      <t>However, as outlined in subsequent sections, 
effective evaluation of generative ML models presents 
numerous challenges and necessitates specialized subjective 
and objective evaluation methods. 
This document proposes a crowdsourced subjective test battery,
along with associated test datasets, to address the unique requirements 
for accurate and reproducible evaluations of ML codecs.
The proposed test battery covers both speech quality and intelligibility, 
including tests in clean, noisy, and reverberant conditions, 
and incorporates real-world audio data. 
The methodology leverages crowdsourced listeners <xref target="CROWDSOURCED-DRT"/> 
to enable rapid and scalable assessments, 
while controlling the variability associated with non-lab-based measurements.</t>
      <t>In the era of generative ML models, 
reference-based objective metrics face additional limitations, 
while non-intrusive methods struggle with generalization, e.g., <xref target="URGENT2025"/> and <xref target="CROWDSOURCED-MUSHRA"/>. 
Consequently, the use of human listeners, 
the gold standard in both quality and intelligibility assessment, 
is of notable importance.
The generative nature of ML codecs also implies that speech intelligibility 
could be significantly improved and/or degraded by such algorithms. 
For example, human perception of some phoneme categories could be enhanced, 
while confusions might be introduced for others, 
including hallucinations of incorrect phonemes even at high overall perceived quality.
Such confusions may not be easily detected in quality tests, 
highlighting a pressing need for highly diagnostic phoneme-category, 
or even phoneme-level, intelligibility assessment methods.</t>
      <t>The subsequent sections present the methodology, key considerations, 
and further motivation underlying the proposed test battery, 
addressing the challenges and requirements discussed above.</t>
      <section anchor="listening-test-methods">
        <name>Listening Test Methods</name>
        <section anchor="mushra-1s">
          <name>MUSHRA-1S</name>
          <t>MUSHRA-1S <xref target="MUSHRA-1S"/>, a variant of the well-established MUSHRA (multiple stimuli with hidden reference and anchor) methodology for assessing quality <xref target="ITU-R.BS1534-3"/> in clean, non-reverberant conditions, is proposed for testing and benchmarking ML codecs. First, MUSHRA is adapted to a crowdsourced, non-expert listener base, as described in <xref target="CROWDSOURCED-MUSHRA"/>. Particularly for generative models, which may cause hallucinations, a reference-based listening test is preferable <xref target="URGENT2025"/>. Second, one system under test is assessed at a time, in the context of a fixed reference and anchor. The advantages of testing one system at a time are the unlimited extendability of test conditions within the quality range of anchor and reference, the avoidance of context effects from other conditions within the same test, the avoidance of difficulties associated with merging results across multiple tests, and a simpler task for participants, which reduces listener fatigue, particularly in non-expert listeners. As such, MUSHRA-1S is similar to absolute category rating (ACR) tests, which can be used to calculate a mean opinion score (MOS), in that it is simple and easily extendable. At the same time, it is more stable than ACR, because the range of expected audio quality is fixed, bound by the anchor and reference. Reference-less MOS scores have been demonstrated to suffer from range-equalizing biases <xref target="COOPER2023"/>, with the other samples presented within the same test defining the range of expectation of what constitutes "good" or "bad" speech quality. The drawback of MUSHRA-1S, compared to a traditional MUSHRA test, is a slightly decreased sensitivity to very small differences between similar methods, which may only be detectable in direct comparisons.</t>
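          <t>As an illustration of the quality control such crowdsourced tests rely on, the sketch below applies a BS.1534-style post-screening rule: listeners who rate the hidden reference below 90 points in more than 15% of their trials are excluded. The data layout and function name are illustrative assumptions, not part of any specification.</t>
          <sourcecode type="python"><![CDATA[
# Post-screening sketch for MUSHRA-style results (assumed data layout):
# `ratings` maps each listener to a list of (condition, score) pairs,
# with "reference" marking the hidden-reference trials.
def eligible_listeners(ratings, ref_floor=90, max_fail_rate=0.15):
    keep = []
    for listener, trials in sorted(ratings.items()):
        ref_scores = [s for cond, s in trials if cond == "reference"]
        if not ref_scores:
            continue  # no hidden-reference trials; cannot screen
        fails = sum(1 for s in ref_scores if s < ref_floor)
        if fails / len(ref_scores) <= max_fail_rate:
            keep.append(listener)
    return keep
]]></sourcecode>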
        </section>
        <section anchor="dcr">
          <name>DCR</name>
          <t>The degradation category rating (DCR) approach is used to produce a degradation mean opinion score (DMOS) <xref target="ITU-T.P800"/>. Although it is typically used with a high-quality reference, the test is also capable of assessing degradation caused by codecs when tested on mild-to-moderately impaired real-world data <xref target="MULLER2024"/>. The approach is more sensitive than absolute category rating (ACR) <xref target="ITU-T.P800"/>. An implementation of the test procedure for crowdsourced tests is available in <xref target="ITU-T.P808"/>.</t>
        </section>
        <section anchor="drt">
          <name>DRT</name>
          <t>The diagnostic rhyme test (DRT) <xref target="ITU-T.P807"/> measures speech intelligibility by presenting minimal pairs in which the contrasted phonemes differ in a specific, controlled phonetic category. The linguistic and acoustic insight of the DRT, with test items belonging to classes of distinctive linguistic features that are acoustically interpretable, makes it a useful tool for both codec analysis and benchmarking. The test is free from context and memory effects and has high sensitivity. It is therefore well suited to a crowdsourced listener audience. Bearing in mind the principles for crowdsourcing listening tests employed in <xref target="ITU-T.P808"/>, the test was adapted for crowdsourced listening tests in <xref target="CROWDSOURCED-DRT"/>, and test vectors in five languages were published <xref target="DRT-REPO"/>. The test data were recently adopted by <xref target="LESCHANOWSKY2025"/>.</t>
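          <t>DRT responses are conventionally scored with a chance correction for the two-alternative forced choice. The sketch below computes this adjusted score together with a normal-approximation 95% confidence interval; the aggregation level (per listener, per item, or pooled) is a design choice left to the tester.</t>
          <sourcecode type="python"><![CDATA[
import math

def drt_score(right, wrong):
    """Chance-corrected DRT score, 100 * (right - wrong) / total,
    with a normal-approximation 95% confidence half-width."""
    total = right + wrong
    score = 100.0 * (right - wrong) / total
    # Each trial contributes +100 (right) or -100 (wrong); the per-trial
    # variance is that of a Bernoulli variable scaled by 200.
    p = right / total
    half_width = 1.96 * 200.0 * math.sqrt(p * (1.0 - p) / total)
    return score, half_width
]]></sourcecode>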
        </section>
        <section anchor="crowdsourcing-adaptations">
          <name>Crowdsourcing Adaptations</name>
          <t>Crowdsourced listening tests benefit from rigorous screening and quality control. In addition to specific implementations of standardized test approaches for crowdsourced listening tests, <xref target="ITU-T.P808"/> provides useful guiding principles for adapting laboratory-based tests to counteract the challenges posed by the comparatively uncontrolled crowdsourcing environment. For instance, qualification and training steps are added before the actual test stimuli are presented, and catch trials are included in the pool of test questions.
It is further recommended to assess the quality of participants' responses across different platforms, such as Amazon Mechanical Turk, Prolific, and others <xref target="CROWDSOURCED-MUSHRA"/>. Each platform has a unique set of filters that can be used to recruit a specific participant pool. The platform and any filters used should always be reported along with test results, as absolute results may depend on those settings and may differ considerably between platforms.</t>
        </section>
      </section>
    </section>
    <section anchor="proposed-crowdsourced-listening-test-battery">
      <name>Proposed Crowdsourced Listening Test Battery</name>
      <t>In the literature, evaluations of speech codec quality often focus solely on clean conditions. 
However, given the wide range of potential applications for modern speech codecs, 
and the unique ways in which ML codecs may be affected by various types of real-world distortions,
it is important to assess their limitations under representative real-world scenarios, 
including challenging listening conditions.</t>
      <t>In addition to clean speech data, the proposed test battery considers performance evaluation on overlapping speech, reverberant and noisy speech, speaker consistency, and phoneme-level intelligibility. The current version comprises predominantly English test vectors, but an extension to multiple languages is desirable.
Some of the modules of the test battery outlined below for assessing standalone ML codec performance can also be used, where applicable, for assessing the performance of redundancy schemes under packet loss conditions (e.g., Opus+DRED).</t>
      <t>The proposed test vectors are publicly available at a sampling rate of 24 kHz at <eref target="https://github.com/cisco/multilingual-speech-testing/tree/main/LRAC-2025-test-data/blind-test-set/track_1">https://github.com/cisco/multilingual-speech-testing/tree/main/LRAC-2025-test-data/blind-test-set/track_1</eref>.</t>
      <section anchor="speech-quality-evaluation">
        <name>Speech Quality Evaluation</name>
        <section anchor="clean-speech-test-vectors">
          <name>Clean Speech Test Vectors</name>
          <t>By employing the MUSHRA-1S approach and utilizing high-quality clean speech data, the system under test is evaluated with respect to overall quality. The reference also allows the listener to assess the correctness of the linguistic content and the preservation of speaker characteristics. In this test, the quality of each codec or extension is assessed in standalone mode. The diverse test set comprises 100 gender-balanced clean speech files, covering 100 unique speakers, and includes samples from both adult and children's speech. Furthermore, the set of test vectors covers a diverse range of accents of English.</t>
        </section>
        <section anchor="real-world-degradation-test-vectors">
          <name>Real-World Degradation Test Vectors</name>
          <t>As speech codecs may be used by a wide variety of applications, it cannot be ensured that the audio to be compressed constitutes clean speech in the sense of dry and noise-free high-quality audio. It is therefore important to assess the codec's resilience to real-world degradation. 
For tests where test vectors have impaired quality, DCR offers an effective way to measure the severity of any additional degradation introduced by the codec. 
The test data consists of 90 crowdsourced speech files in mildly impaired real-world scenarios of noise and reverberation. Of these, 45 files focus predominantly on reverberant speech and 45 on speech in noise. The reverberation and noise levels are mild to moderate.</t>
        </section>
        <section anchor="simultaneous-talker-test-vectors">
          <name>Simultaneous Talker Test Vectors</name>
          <t>Most applications rely on the codec's ability to preserve simultaneously occurring speech from multiple talkers. In practice, however, this can be a challenging task. A listening test using the DCR methodology offers insight into whether the presence of overlapping speech leads to degradation, which may occur in the form of artifacts or speech suppression. The proposed test set consists of 20 files of conversations between two to three talkers.</t>
        </section>
        <section anchor="packet-loss-scenarios">
          <name>Packet Loss Scenarios</name>
          <t>Real-world packet loss traces and/or simulated loss patterns (including using the packet loss simulator provided by the working group in Opus) can be utilized to evaluate the overall quality of redundancy codecs, such as Opus and DRED working together.</t>
          <t>Details TBD.</t>
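          <t>For simulated loss patterns, a two-state Gilbert-Elliott model is one common way to generate bursty loss traces; in practice, the packet loss simulator provided in Opus or recorded real-world traces would be used. The transition probabilities below are illustrative assumptions, not proposed values.</t>
          <sourcecode type="python"><![CDATA[
import random

def gilbert_elliott(n_packets, p_good_to_bad=0.05, p_bad_to_good=0.4, seed=0):
    """Return a per-packet loss trace (True = packet lost) drawn from a
    two-state Markov model producing bursty losses."""
    rng = random.Random(seed)
    bad = False
    trace = []
    for _ in range(n_packets):
        if bad:
            bad = rng.random() >= p_bad_to_good  # remain in the bad state?
        else:
            bad = rng.random() < p_good_to_bad   # enter the bad state?
        trace.append(bad)
    return trace
]]></sourcecode>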
        </section>
      </section>
      <section anchor="speech-intelligibility-evaluation">
        <name>Speech Intelligibility Evaluation</name>
        <section anchor="clean-speech-test-vectors-1">
          <name>Clean Speech Test Vectors</name>
          <t>The DRT for evaluating speech intelligibility, adapted for crowdsourced participants <xref target="CROWDSOURCED-DRT"/>, is proposed to be performed on a subset of the stimuli provided in <xref target="DRT-REPO"/>. The subset consists of two test vectors, one male and one female talker sample, for each word pair in the standard DRT word list for English <xref target="ITU-T.P807"/>. Test vectors for four other languages are also available in the same collection. 
Due to listeners' perceptual sensitivity to the subtle and highly localized cues that distinguish the two target phonemes, this test is primarily applicable in the evaluation of standalone codecs, with limited expected utility when combined with packet losses and redundancy schemes.</t>
        </section>
        <section anchor="noisy-test-vectors">
          <name>Noisy Test Vectors</name>
          <t>In order to evaluate a codec's resilience to noise in terms of speech intelligibility, the proposed evaluation battery for ML codecs contains noisy counterparts of the clean speech test vectors described in the previous section. Speech-shaped noise (SSN) is used as a stationary additive masker in which intelligibility can be evaluated. While the presence of noise may lead to particularly severe codec distortion in some models, even the presence of well-preserved noise can help to distinguish the intelligibility of high-quality models that demonstrate a ceiling effect in clean conditions. The use of stationary noise is essential for the DRT to ensure uniform effects on the short-term localized perceptual cues. For the same reason, the noisy version of the test is also geared towards the evaluation of standalone codecs. 
The SSN was generated from the long-term average of short-term spectra of a publicly available clean speech data set <xref target="DEMIRSAHIN2020"/>. 
The average spectrum was used to design a filter, which was applied to white noise, resulting in SSN.</t>
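          <t>The SSN generation described above can be sketched as follows. Frame and FFT sizes are illustrative assumptions; the essential steps are averaging the short-term magnitude spectra of clean speech and imposing that average spectrum on white noise.</t>
          <sourcecode type="python"><![CDATA[
import numpy as np

def make_ssn(speech, n_samples, n_fft=1024, hop=256, seed=0):
    """Generate speech-shaped noise by shaping white noise with the
    long-term average magnitude spectrum of `speech`."""
    window = np.hanning(n_fft)
    frames = [speech[i:i + n_fft] * window
              for i in range(0, len(speech) - n_fft, hop)]
    avg_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    # Shape white noise in the frequency domain (zero-phase filtering).
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_samples)
    spec = np.fft.rfft(noise)
    shape = np.interp(np.fft.rfftfreq(len(noise)),
                      np.fft.rfftfreq(n_fft), avg_mag)
    ssn = np.fft.irfft(spec * shape, n=len(noise))
    return ssn / np.max(np.abs(ssn))  # normalize to unit peak
]]></sourcecode>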
        </section>
      </section>
      <section anchor="example-results">
        <name>Example Results</name>
        <t>The results shown in Table 1 below were obtained using the test methodology described above. Subjective tests were run on the Prolific crowdsourcing platform. Participants were required to be native speakers of English with an approval rate of at least 98% and at least 110 previous submissions. Only participants without any self-reported hearing impairments and without a cochlear implant were invited to participate. Additionally, the diagnostic rhyme test studies were only open to participants who self-reported not having dyslexia.</t>
        <table>
          <thead>
            <tr>
              <th align="left">Codec</th>
              <th align="center">Quality in Clean Speech (MUSHRA) [95% CI]</th>
              <th align="center">Intelligibility in Clean Speech (DRT Score) [95% CI]</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Clean input</td>
              <td align="center">98.3 [+/- 0.2]</td>
              <td align="center">94.9 [+/- 1.3]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 9kbps NOLACE</td>
              <td align="center">85.4 [+/- 1.7]</td>
              <td align="center">90.0 [+/- 2.0]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 9kbps LACE</td>
              <td align="center">70.2 [+/- 2.0]</td>
              <td align="center">90.6 [+/- 1.8]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 9kbps</td>
              <td align="center">56.2 [+/- 2.3]</td>
              <td align="center">89.0 [+/- 2.0]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 6kbps</td>
              <td align="center">24.0 [+/- 0.7]</td>
              <td align="center">86.3 [+/- 2.4]</td>
            </tr>
            <tr>
              <td align="left">DRED SA 2kbps</td>
              <td align="center">64.9 [+/- 2.3]</td>
              <td align="center">88.4 [+/- 2.4]</td>
            </tr>
            <tr>
              <td align="left">DRED SA 1kbps</td>
              <td align="center">52.0 [+/- 2.4]</td>
              <td align="center">84.5 [+/- 2.8]</td>
            </tr>
            <tr>
              <td align="left">DRED SA 0.5kbps</td>
              <td align="center">20.7 [+/- 2.2]</td>
              <td align="center">71.7 [+/- 3.8]</td>
            </tr>
            <tr>
              <td align="left">Candidate 1 SA 2kbps</td>
              <td align="center">TBD</td>
              <td align="center">TBD</td>
            </tr>
            <tr>
              <td align="left">Candidate 1 SA 1kbps</td>
              <td align="center">TBD</td>
              <td align="center">TBD</td>
            </tr>
            <tr>
              <td align="left">Candidate 1 SA 0.5kbps</td>
              <td align="center">TBD</td>
              <td align="center">TBD</td>
            </tr>
            <tr>
              <td align="left">Candidate 2 SA 2kbps</td>
              <td align="center">TBD</td>
              <td align="center">TBD</td>
            </tr>
            <tr>
              <td align="left">Candidate 2 SA 1kbps</td>
              <td align="center">TBD</td>
              <td align="center">TBD</td>
            </tr>
            <tr>
              <td align="left">Candidate 2 SA 0.5kbps</td>
              <td align="center">TBD</td>
              <td align="center">TBD</td>
            </tr>
          </tbody>
        </table>
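        <t>The bracketed 95% confidence intervals in the table can be reproduced from individual ratings using the normal approximation, as in the minimal sketch below; the aggregation level (e.g., per file or per listener) is an analysis choice.</t>
        <sourcecode type="python"><![CDATA[
import math

def mean_ci95(scores):
    """Sample mean and half-width of a normal-approximation 95% CI."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    return mean, 1.96 * math.sqrt(var / n)
]]></sourcecode>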
      </section>
    </section>
    <section anchor="objective-evaluation">
      <name>Objective Evaluation</name>
      <t>Objective metrics are often used during the development of speech codecs, 
with expert evaluations conducted towards the end of the development lifecycle. 
While effective for traditional DSP-based codecs, 
well-established reference-based metrics, 
such as PESQ <xref target="ITU-T.P862"/>, often fail to accurately evaluate generative methods.
For instance, PESQ has been empirically shown to have an underestimation bias 
for generative models, which may have high output quality but whose 
output may also differ considerably from the reference <xref target="CROWDSOURCED-MUSHRA"/>.</t>
      <t>At present, research into alternative metrics is flourishing, 
with various innovative methods being proposed, 
such as non-intrusive DNN-based metrics (e.g., <xref target="UTMOS"/>), 
metrics with non-matched references (e.g., <xref target="SCOREQ"/>), 
and composite-score metrics (e.g., <xref target="UNI-VERSA"/>). 
While recent correlation investigations, e.g., <xref target="URGENT2025"/>, are promising, 
it is too early to include such metrics in this proposal, 
as it is yet to be seen which metrics can demonstrate both good accuracy and generalization 
to a variety of generative models and test vectors. 
Further insights in this area are of potential value for rapid, 
accessible and inexpensive evaluation of ML codecs. 
Hence, we propose to investigate which objective metrics are effective 
predictors of listener responses for the test battery components, 
and under which conditions.</t>
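      <t>The proposed investigation amounts to correlating objective metric outputs with the subjective scores collected by the test battery, per condition and overall. A minimal Pearson correlation sketch is shown below; Spearman rank correlation is often reported alongside it for monotonic but non-linear relationships.</t>
      <sourcecode type="python"><![CDATA[
import math

def pearson(xs, ys):
    """Pearson correlation between objective and subjective scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
]]></sourcecode>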
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

</section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC6716">
          <front>
            <title>Definition of the Opus Audio Codec</title>
            <author fullname="JM. Valin" initials="JM." surname="Valin"/>
            <author fullname="K. Vos" initials="K." surname="Vos"/>
            <author fullname="T. Terriberry" initials="T." surname="Terriberry"/>
            <date month="September" year="2012"/>
            <abstract>
              <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances. It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s. Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6716"/>
          <seriesInfo name="DOI" value="10.17487/RFC6716"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="ITU-T.P800">
          <front>
            <title>Methods for subjective determination of transmission quality</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="1996" month="August"/>
          </front>
          <seriesInfo name="ITU-T" value="Recommendation P.800"/>
        </reference>
        <reference anchor="ITU-R.BS1534-3">
          <front>
            <title>Method for the subjective assessment of intermediate quality level of audio systems</title>
            <author>
              <organization>ITU-R</organization>
            </author>
            <date year="2015" month="October"/>
          </front>
          <seriesInfo name="ITU-R" value="Recommendation BS.1534-3"/>
        </reference>
        <reference anchor="ITU-T.P807">
          <front>
            <title>Subjective test methodology for assessing speech intelligibility</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="2016" month="February"/>
          </front>
          <seriesInfo name="ITU-T" value="Recommendation P.807"/>
        </reference>
        <reference anchor="ITU-T.P808">
          <front>
            <title>Subjective evaluation of speech quality with a crowdsourcing approach</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="2021" month="June"/>
          </front>
          <seriesInfo name="ITU-T" value="Recommendation P.808"/>
        </reference>
        <reference anchor="ITU-T.P862" target="https://www.itu.int/rec/T-REC-P.862">
          <front>
            <title>Perceptual evaluation of speech quality (PESQ): An objective method for end-to-end speech quality assessment of narrow-band telephone networks and speech codecs</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="2001" month="February"/>
          </front>
        </reference>
        <reference anchor="CROWDSOURCED-DRT" target="https://ieeexplore.ieee.org/document/10447869">
          <front>
            <title>Crowdsourced Multilingual Speech Intelligibility Testing</title>
            <author initials="L." surname="Lechler" fullname="L. Lechler">
              <organization/>
            </author>
            <author initials="K." surname="Wojcicki" fullname="K. Wojcicki">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
          <seriesInfo name="ICASSP" value="2024"/>
          <seriesInfo name="DOI" value="10.1109/ICASSP48485.2024.10447869"/>
        </reference>
        <reference anchor="LESCHANOWSKY2025" target="https://arxiv.org/abs/2506.01731v1">
          <front>
            <title>Benchmarking Neural Speech Codec Intelligibility with SITool</title>
            <author initials="A." surname="Leschanowsky" fullname="A. Leschanowsky">
              <organization/>
            </author>
            <author initials="K.K." surname="Lakshminarayana" fullname="K.K. Lakshminarayana">
              <organization/>
            </author>
            <author initials="A." surname="Rajasekhar" fullname="A. Rajasekhar">
              <organization/>
            </author>
            <author initials="L." surname="Behringer" fullname="L. Behringer">
              <organization/>
            </author>
            <author initials="I." surname="Kilinc" fullname="I. Kilinc">
              <organization/>
            </author>
            <author initials="G." surname="Fuchs" fullname="G. Fuchs">
              <organization/>
            </author>
            <author initials="E.A.P." surname="Habets" fullname="E.A.P. Habets">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
          <seriesInfo name="INTERSPEECH" value="2025"/>
          <seriesInfo name="DOI" value="10.48550/arXiv.2506.01731"/>
        </reference>
        <reference anchor="CROWDSOURCED-MUSHRA" target="https://arxiv.org/abs/2506.00950">
          <front>
            <title>Crowdsourcing MUSHRA Tests in the Age of Generative Speech Technologies: A Comparative Analysis of Subjective and Objective Testing Methods</title>
            <author initials="L." surname="Lechler" fullname="L. Lechler">
              <organization/>
            </author>
            <author initials="C." surname="Moradi" fullname="C. Moradi">
              <organization/>
            </author>
            <author initials="I." surname="Balic" fullname="I. Balic">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
          <seriesInfo name="INTERSPEECH" value="2025"/>
        </reference>
        <reference anchor="COOPER2023" target="https://www.isca-archive.org/interspeech_2023/cooper23_interspeech.pdf">
          <front>
            <title>Investigating Range-Equalizing Bias in Mean Opinion Score Ratings of Synthesized Speech</title>
            <author initials="E." surname="Cooper" fullname="E. Cooper">
              <organization/>
            </author>
            <author initials="J." surname="Yamagishi" fullname="J. Yamagishi">
              <organization/>
            </author>
            <date year="2023"/>
          </front>
          <seriesInfo name="INTERSPEECH" value="2023"/>
          <seriesInfo name="pages" value="1104--1108"/>
        </reference>
        <reference anchor="DRT-REPO" target="https://github.com/cisco/multilingual-speech-testing/tree/main/speech-intelligibility-DRT">
          <front>
            <title>Multilingual Speech Testing - Speech Intelligibility DRT</title>
            <author>
              <organization>Cisco Systems</organization>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="MULLER2024" target="https://www.isca-archive.org/interspeech_2024/muller24c_interspeech.pdf">
          <front>
            <title>Speech quality evaluation of neural audio codecs</title>
            <author initials="T." surname="Muller" fullname="T. Muller">
              <organization/>
            </author>
            <author initials="S." surname="Ragot" fullname="S. Ragot">
              <organization/>
            </author>
            <author initials="L." surname="Gros" fullname="L. Gros">
              <organization/>
            </author>
            <author initials="P." surname="Philippe" fullname="P. Philippe">
              <organization/>
            </author>
            <author initials="P." surname="Scalart" fullname="P. Scalart">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
          <seriesInfo name="INTERSPEECH" value="2024"/>
          <seriesInfo name="pages" value="1760--1764"/>
        </reference>
        <reference anchor="URGENT2025" target="https://arxiv.org/abs/2505.23212">
          <front>
            <title>Interspeech 2025 URGENT Speech Enhancement Challenge</title>
            <author initials="K." surname="Saijo" fullname="K. Saijo">
              <organization/>
            </author>
            <author initials="W." surname="Zhang" fullname="W. Zhang">
              <organization/>
            </author>
            <author initials="S." surname="Cornell" fullname="S. Cornell">
              <organization/>
            </author>
            <author initials="R." surname="Scheibler" fullname="R. Scheibler">
              <organization/>
            </author>
            <author initials="C." surname="Li" fullname="C. Li">
              <organization/>
            </author>
            <author initials="Z." surname="Ni" fullname="Z. Ni">
              <organization/>
            </author>
            <author initials="A." surname="Kumar" fullname="A. Kumar">
              <organization/>
            </author>
            <author initials="M." surname="Sach" fullname="M. Sach">
              <organization/>
            </author>
            <author initials="Y." surname="Fu" fullname="Y. Fu">
              <organization/>
            </author>
            <author initials="W." surname="Wang" fullname="W. Wang">
              <organization/>
            </author>
            <author initials="T." surname="Fingscheidt" fullname="T. Fingscheidt">
              <organization/>
            </author>
            <author initials="S." surname="Watanabe" fullname="S. Watanabe">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
          <seriesInfo name="INTERSPEECH" value="2025"/>
          <seriesInfo name="DOI" value="10.48550/arXiv.2505.23212"/>
        </reference>
        <reference anchor="UNI-VERSA" target="https://arxiv.org/abs/2505.20741">
          <front>
            <title>Uni-VERSA: Versatile Speech Assessment with a Unified Network</title>
            <author initials="J." surname="Shi" fullname="J. Shi">
              <organization/>
            </author>
            <author initials="H.J." surname="Shim" fullname="H.J. Shim">
              <organization/>
            </author>
            <author initials="S." surname="Watanabe" fullname="S. Watanabe">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
          <seriesInfo name="DOI" value="10.48550/arXiv.2505.20741"/>
        </reference>
        <reference anchor="DEMIRSAHIN2020" target="https://www.aclweb.org/anthology/2020.lrec-1.804">
          <front>
            <title>Crowdsourced high-quality UK and Ireland English Dialect speech data set</title>
            <author initials="I." surname="Demirsahin" fullname="I. Demirsahin">
              <organization/>
            </author>
            <author initials="O." surname="Kjartansson" fullname="O. Kjartansson">
              <organization/>
            </author>
            <author initials="A." surname="Gutkin" fullname="A. Gutkin">
              <organization/>
            </author>
            <author initials="C." surname="Rivera" fullname="C. Rivera">
              <organization/>
            </author>
            <date year="2020"/>
          </front>
          <seriesInfo name="LREC" value="2020"/>
          <seriesInfo name="pages" value="6532--6541"/>
          <seriesInfo name="ISBN" value="979-10-95546-34-4"/>
        </reference>
        <reference anchor="SCOREQ" target="https://proceedings.neurips.cc/paper_files/paper/2024/file/bece7e02455a628b770e49fcfa791147-Paper-Conference.pdf">
          <front>
            <title>SCOREQ: Speech Quality Assessment with Contrastive Regression</title>
            <author initials="A." surname="Ragano" fullname="A. Ragano">
              <organization/>
            </author>
            <author initials="J." surname="Skoglund" fullname="J. Skoglund">
              <organization/>
            </author>
            <author initials="A." surname="Hines" fullname="A. Hines">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
          <seriesInfo name="NeurIPS" value="2024"/>
          <seriesInfo name="pages" value="105702--105729"/>
        </reference>
        <reference anchor="UTMOS" target="https://www.isca-archive.org/interspeech_2022/saeki22c_interspeech.pdf">
          <front>
            <title>UTMOS: UTokyo-SaruLab System for VoiceMOS Challenge 2022</title>
            <author initials="T." surname="Saeki" fullname="T. Saeki">
              <organization/>
            </author>
            <author initials="D." surname="Xin" fullname="D. Xin">
              <organization/>
            </author>
            <author initials="W." surname="Nakata" fullname="W. Nakata">
              <organization/>
            </author>
            <author initials="T." surname="Koriyama" fullname="T. Koriyama">
              <organization/>
            </author>
            <author initials="S." surname="Takamichi" fullname="S. Takamichi">
              <organization/>
            </author>
            <author initials="H." surname="Saruwatari" fullname="H. Saruwatari">
              <organization/>
            </author>
            <date year="2022"/>
          </front>
          <seriesInfo name="INTERSPEECH" value="2022"/>
          <seriesInfo name="pages" value="4521--4525"/>
        </reference>
        <reference anchor="MUSHRA-1S" target="https://arxiv.org/abs/2509.19219">
          <front>
            <title>MUSHRA-1S: A scalable and sensitive test approach for evaluating top-tier speech processing systems</title>
            <author initials="L." surname="Lechler" fullname="L. Lechler">
              <organization/>
            </author>
            <author initials="I." surname="Balic" fullname="I. Balic">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
          <seriesInfo name="Preprint" value="2025"/>
        </reference>
      </references>
    </references>
  </back>

</rfc>
