# Automatic Speech Recognition

### faster-whisper-large-v3-ca-3catparla

* Model: [projecte-aina/faster-whisper-large-v3-ca-3catparla](https://huggingface.co/projecte-aina/faster-whisper-large-v3-ca-3catparla)
* Inference: <https://l9w4uzm374uyn9xk.us-east-1.aws.endpoints.huggingface.cloud>
* GPU: T4
* How to use it: [Notebook for using Faster Whisper](https://colab.research.google.com/drive/1v_3m1aR9CwYXgPVBlhwDI9Hz4V5Dlh95?usp=sharing)

### STT Ca Citrinet 512

* Model: [projecte-aina/stt-ca-citrinet-512](https://huggingface.co/projecte-aina/stt-ca-citrinet-512)
* Inference: <https://h3xisjmpemyv68l1.us-east-1.aws.endpoints.huggingface.cloud>
* GPU: T4

### whisper-large-v3-ca-3catparla

* Model: [projecte-aina/whisper-large-v3-ca-3catparla](https://huggingface.co/projecte-aina/whisper-large-v3-ca-3catparla)
* Inference: <https://ddb95svxi9vs16zy.us-east-1.aws.endpoints.huggingface.cloud>
* GPU: T4

**ASR APIs**

* How to use it: [Notebook for using Whisper](https://colab.research.google.com/drive/1MHiPrffNTwiyWeUyMQvSdSbfkef_8aJC?usp=sharing)

**whisper-large-v3-ca-3catparla endpoint**

{% code title="example.py" %}

```python
import requests

API_URL = "https://ddb95svxi9vs16zy.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Accept": "application/json",
    "Authorization": "Bearer <hf_token>",  # replace <hf_token> with your Hugging Face token
    "Content-Type": "audio/wav",
}


def query(filename):
    # Read the audio file and send its raw bytes to the endpoint
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()


output = query("sample1.wav")
```

{% endcode %}
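
The endpoint returns a JSON object. For Whisper-based ASR endpoints this typically includes a `text` field with the transcription, but the exact schema is not documented here, so the field name below is an assumption; inspect the raw response to confirm.

{% code title="read_response.py" %}

```python
# Inspect the raw response first; the exact schema depends on the endpoint.
print(output)

# Whisper-based endpoints usually return the transcription under a "text" key
# (assumption -- verify against the actual JSON you receive).
transcription = output.get("text")
print(transcription)
```

{% endcode %}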

**faster-whisper endpoint**

Python

{% code title="example.py" %}

```python
import requests

API_URL = "https://l9w4uzm374uyn9xk.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Accept": "application/json",
    "Authorization": "Bearer <hf_token>",  # replace <hf_token> with your Hugging Face token
    "Content-Type": "audio/wav",
}


def query(filename):
    # Read the audio file and send its raw bytes to the endpoint
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()


output = query("sample1.wav")
```

{% endcode %}

Curl

{% code title="bash" %}

```bash
curl "https://l9w4uzm374uyn9xk.us-east-1.aws.endpoints.huggingface.cloud/" \
  -X POST \
  --data-binary '@sample1.flac' \
  -H "Accept: application/json" \
  -H "Authorization: Bearer <hf_token>" \
  -H "Content-Type: audio/flac"
```

{% endcode %}

**Citrinet endpoint**

Python

{% code title="example.py" %}

```python
import requests

API_URL = "https://h3xisjmpemyv68l1.us-east-1.aws.endpoints.huggingface.cloud/"
headers = {
    "Accept": "application/json",
    "Authorization": "Bearer <hf_token>",  # replace <hf_token> with your Hugging Face token
    "Content-Type": "audio/wav",
}


def query(filename):
    # Read the audio file and send its raw bytes to the endpoint
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()


output = query("sample.wav")
print(output)
```

{% endcode %}

Possible issues with the endpoints:

* *HTTP/1.1 401 Unauthorized:* The Hugging Face token was missing or invalid. Copy your token and use it in place of `<hf_token>` in the request headers.
* *HTTP/1.1 503 Service Unavailable:* Occurs while the endpoint is initializing, since it is not active all the time. Retry the same request after a short wait.
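
Since a 503 usually means the endpoint is still waking up, a simple retry loop handles it. Below is a minimal sketch against the whisper-large-v3-ca-3catparla endpoint; the retry count and wait time are illustrative choices, not values prescribed by the service.

{% code title="retry_example.py" %}

```python
import time

import requests

API_URL = "https://ddb95svxi9vs16zy.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Accept": "application/json",
    "Authorization": "Bearer <hf_token>",  # replace <hf_token> with your Hugging Face token
    "Content-Type": "audio/wav",
}


def query_with_retries(filename, max_retries=5, wait_seconds=15):
    # Retry while the endpoint returns 503 (still initializing).
    # max_retries and wait_seconds are illustrative values, not service defaults.
    with open(filename, "rb") as f:
        data = f.read()
    for attempt in range(max_retries):
        response = requests.post(API_URL, headers=headers, data=data)
        if response.status_code != 503:
            response.raise_for_status()  # surface 401 and other errors immediately
            return response.json()
        time.sleep(wait_seconds)
    raise RuntimeError(f"Endpoint still unavailable after {max_retries} attempts")


output = query_with_retries("sample1.wav")
print(output)
```

{% endcode %}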

