
Conversation

@mergennachin
Contributor

No description provided.

@pytorch-bot

pytorch-bot bot commented Jan 7, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16499

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 1 Unrelated Failure

As of commit fc45eed with merge base 55fe42b:

NEW FAILURES - The following jobs have failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label on Jan 7, 2026
@github-actions

github-actions bot commented Jan 7, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

mergennachin force-pushed the parakeet_metal branch 8 times, most recently from f39750d to 385acb3, on January 8, 2026 at 22:29
@mergennachin
Contributor Author

mergennachin commented Jan 8, 2026

@JacobSzwejbka @manuelcandales @Gasoonjia

Here's the latest update:

  1. Export is good.
  2. Runtime is failing; see https://gist.github.com/mergennachin/a8b5dbcb923a17ac1f84c0dda27bad77
  3. The aoti_torch_mps_addmm_out implementation was also missing, on top of aoti_torch_mps_bmm_out.
  4. There was a bug in the convolution op; fixed in this PR.
  5. Fix MPS constant blob offset indexing in AOTI runtime (pytorch#171998) seems to be a false alarm; closing that PR for now.

@mergennachin
Contributor Author

Here's the stack trace during the assertion:

https://gist.github.com/mergennachin/8a43299f4e74c16da3dd2946275737cf

@manuelcandales
Contributor

I have been looking at the Inductor-generated code, and this is the line that causes the error:

AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_mps_addmm_out(
    buf12,
    wrap_with_raii_handle_if_needed(reinterpret_tensor_wrapper(pre_encode_out_bias, 2, int_array_10, int_array_12, 0LL)),
    wrap_with_raii_handle_if_needed(reinterpret_tensor_wrapper(buf11, 2, int_array_13, int_array_14, 0LL)),
    wrap_with_raii_handle_if_needed(reinterpret_tensor_wrapper(pre_encode_out_weight, 2, int_array_15, int_array_16, 0LL)),
    1LL, 1LL));

The error is produced by that first call to reinterpret_tensor_wrapper:

reinterpret_tensor_wrapper(pre_encode_out_bias, 2, int_array_10, int_array_12, 0LL)

pre_encode_out_bias contains 1024 elements
int_array_10 = {20, 1024} (these are the sizes)
int_array_12 = {0, 1} (these are the strides)

reinterpret_tensor_wrapper turns into a call to aoti_torch__reinterpret_tensor, which calls executorch::extension::from_blob, which in turn calls make_tensor_ptr.

So this becomes a call to make_tensor_ptr with data containing 1024 elements, sizes = {20, 1024}, and strides = {0, 1}.
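
To make the failing view concrete: below is a minimal PyTorch sketch (illustrative only, not the generated AOTI code or the ExecuTorch runtime path) of the view the addmm bias broadcast requests, i.e. sizes {20, 1024} with strides {0, 1} over a 1024-element buffer.

```python
import torch

bias = torch.randn(1024)  # stands in for pre_encode_out_bias

# sizes (20, 1024) with strides (0, 1): the zero stride along dim 0
# makes all 20 rows alias the same 1024 values, i.e. the bias is
# broadcast across the batch dimension without any extra storage.
view = torch.as_strided(bias, size=(20, 1024), stride=(0, 1))

assert view.shape == (20, 1024)
assert (view[0] == view[19]).all()  # every row is the same bias
# A from_blob/make_tensor_ptr path that expects numel() == 20 * 1024
# real elements behind the pointer (or that rejects zero strides)
# will fail on this view even though the view itself is well-formed.
```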

@mergennachin
Copy link
Contributor Author

Closing in favor of #16562

manuelcandales added a commit that referenced this pull request Jan 14, 2026
This pull request builds on top of #16499 to introduce support for the
Parakeet model in the Metal backend. The most important changes are
grouped below:

### Parakeet export/lowering:

* Added support for Metal lowering to the Parakeet export/lowering
script
* Provided a custom linear decomposition. This achieved two objectives:
it avoided the call to addmm, and it avoided the call to
reinterpret_tensor_wrapper with a 0 stride (a sketch of the idea
follows this list)
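
The commit text doesn't show the decomposition itself; the following is a hypothetical sketch of the idea, expressing linear as an explicit mm plus a broadcasting add so that the lowered graph contains no addmm call and no zero-stride bias view:

```python
import torch

def linear_decomposition(x, weight, bias=None):
    # Hypothetical illustration: mm instead of addmm, so the 1-D bias
    # is never reinterpreted as a 2-D tensor with a zero stride;
    # broadcasting happens in the ordinary add instead.
    out = torch.mm(x, weight.t())
    if bias is not None:
        out = out + bias
    return out

x = torch.randn(20, 512)
w = torch.randn(1024, 512)
b = torch.randn(1024)
assert torch.allclose(linear_decomposition(x, w, b),
                      torch.nn.functional.linear(x, w, b), atol=1e-5)
```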

### Operator updates:

* Added implementation for `aoti_torch_mps_bmm_out` to support batched
matrix multiplication (bmm) in the Metal backend
* Fixed input channel dimension handling for grouped convolutions in
`aoti_torch_mps_convolution` by reading the correct dimension from the
weight tensor (illustrated after this list).
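
As an illustration of the grouped-convolution point (a hypothetical helper, not the shim code itself): a conv weight is laid out as (out_channels, in_channels // groups, kH, kW), so the input channel count has to be recovered from the weight together with the group count.

```python
import torch

def input_channels_from_weight(weight: torch.Tensor, groups: int) -> int:
    # Weight layout is (out_channels, in_channels // groups, kH, kW),
    # so weight.size(1) alone understates the input channels when groups > 1.
    return weight.size(1) * groups

w = torch.randn(16, 4, 3, 3)                    # grouped conv weight
assert input_channels_from_weight(w, groups=2) == 8
x = torch.randn(1, 8, 32, 32)                   # 8 input channels
y = torch.nn.functional.conv2d(x, w, groups=2)  # shapes line up
assert y.shape == (1, 16, 30, 30)
```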

### Shim layer updates:

* Added implementation for `aoti_torch_new_tensor_handle`
* Enabled non-zero tensor storage offsets in
`aoti_torch__reinterpret_tensor` by adjusting the data pointer instead
of rejecting non-zero offsets, and updated the memory tracking and
Metal buffer mapping logic accordingly (pointer arithmetic illustrated
after this list).
* Added the `metal_buffer_nocopy` function to map arbitrary memory
pointers into Metal buffers, supporting cases where the data pointer is
offset.
* Improved error messages in several stubbed shim functions by including
the function name in the exception message for easier debugging.
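
The storage-offset behavior can be illustrated with plain PyTorch (a sketch of the semantics only, not the shim implementation): a view with a non-zero storage offset is just the base data pointer advanced by offset * element_size bytes, which is the adjustment the shim now performs instead of returning an error.

```python
import torch

base = torch.arange(10, dtype=torch.float32)

# A 5-element view starting at storage offset 3.
view = base.as_strided(size=(5,), stride=(1,), storage_offset=3)

# The view's data pointer is the base pointer advanced by
# storage_offset * element_size bytes.
assert view.data_ptr() == base.data_ptr() + 3 * base.element_size()
assert torch.equal(view, base[3:8])
```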

Labels

CLA Signed