Single GPU inference + HuggingFace #8

@jglaser

Description

Is your feature request related to a problem? Please describe.

I'd like to compare MegaMolBart against other models, such as ChemBertA. However, while the latter is available on HF for easy experimentation on a single GPU, I do not know where to find the weights for MegaMolBart (other than inside a container, which is inconvenient to download and to use on shared computing systems). The container also pulls in its own dependency stack (NeMo/Megatron).

Describe the solution you'd like

Ideally, single-GPU inference would be as easy as the following few lines of code, HuggingFace style. Possibly the underlying ChemFormer model would first need to be implemented in HF.

from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('nvidia/megamolbart')
tokenizer = AutoTokenizer.from_pretrained('nvidia/megamolbart')
inputs = tokenizer(['c1ccccc1'], return_tensors='pt')  # return_tensors is needed so the model receives tensors, not lists
embedding = model(**inputs).last_hidden_state
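For comparison across models, last_hidden_state (shape batch × sequence length × hidden size) is usually reduced to one fixed-size vector per molecule, e.g. by mean pooling over non-padding tokens using the tokenizer's attention_mask. A minimal sketch of that pooling step, with NumPy arrays standing in for the model output (the shapes here are illustrative, not MegaMolBart's actual dimensions):

import numpy as np

# stand-ins for last_hidden_state and attention_mask: (batch=2, seq_len=8, hidden=256)
hidden = np.random.randn(2, 8, 256)
mask = np.ones((2, 8))        # 1 for real tokens, 0 for padding
mask[1, 5:] = 0               # pretend the second molecule is shorter

# masked mean pooling: sum over real tokens, divide by their count
pooled = (hidden * mask[..., None]).sum(axis=1) / mask.sum(axis=1, keepdims=True)
print(pooled.shape)           # (2, 256): one embedding per molecule

With real model output, hidden would be embedding.detach().numpy() and mask would come from inputs['attention_mask'].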

Describe alternatives you've considered

N/A

Additional context

N/A
