Conversation

@jerryzh168 (Contributor) commented:

Summary:
As titled: this is a prototype feature until we see wider adoption. Only per-tensor and per-row granularity are supported, for both activations and weights (see the sketch after the description fields below).

Test Plan:
python test/prototype/test_float8_static.py

Reviewers:

Subscribers:

Tasks:

Tags:
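
For readers unfamiliar with the two granularities, here is a minimal sketch in plain PyTorch of what per-tensor vs. per-row float8 scaling means. The helper names are mine, not the torchao API; this is reference math under stated assumptions, not this PR's implementation:

```python
import torch

F8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

def per_tensor_scale(x: torch.Tensor) -> torch.Tensor:
    # One scale shared by the whole tensor: amax over all elements.
    return x.abs().amax().clamp(min=1e-12) / F8_MAX

def per_row_scale(x: torch.Tensor) -> torch.Tensor:
    # One scale per row: amax over the last dim, shape (rows, 1).
    return x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / F8_MAX

def quantize_fp8(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Rescale into the representable float8 range, then cast.
    return (x / scale).clamp(-F8_MAX, F8_MAX).to(torch.float8_e4m3fn)

w = torch.randn(128, 256)
w_fp8 = quantize_fp8(w, per_row_scale(w))     # per-row weight quantization
a = torch.randn(32, 256)
a_fp8 = quantize_fp8(a, per_tensor_scale(a))  # per-tensor activation quantization
```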

@pytorch-bot bot commented on Dec 18, 2025:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3509

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit f96600a with merge base 1f9bfd7:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed here but was already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Dec 18, 2025.
hp_value_ub: Optional[float] = None,
kernel_preference: KernelPreference = KernelPreference.AUTO,
act_quant_kwargs: Optional[QuantizeTensorToFloat8Kwargs] = None,
scale: Optional[torch.Tensor] = None,
A Contributor commented on this diff:

Why do we need both `scale` and `act_scale` here?
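
If it helps frame the question: in a static-quantization flow the activation scale is calibrated offline and baked into the module, separately from the weight's own scale. Whether `scale` in this signature is the weight scale is exactly what the comment asks; the sketch below only illustrates the general pattern, in plain PyTorch reference math with hypothetical names, not this PR's kernel path:

```python
import torch

F8_MAX = torch.finfo(torch.float8_e4m3fn).max

def static_fp8_linear(x, w_fp8, w_scale, act_scale):
    # Static quantization: act_scale was computed during calibration,
    # so no runtime amax pass over x is needed at inference time.
    x_fp8 = (x / act_scale).clamp(-F8_MAX, F8_MAX).to(torch.float8_e4m3fn)
    # Dequantized reference matmul; a real kernel would use a scaled fp8 GEMM.
    return (x_fp8.to(torch.float32) * act_scale) @ (w_fp8.to(torch.float32) * w_scale).t()

w = torch.randn(64, 128)
w_scale = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / F8_MAX  # per-row
w_fp8 = (w / w_scale).to(torch.float8_e4m3fn)
act_scale = torch.tensor(0.02)  # assumed calibrated offline on representative inputs
y = static_fp8_linear(torch.randn(8, 128), w_fp8, w_scale, act_scale)
```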
