A Microsoft Health Futures package for working with multi-modal health data

Project description

HI-ML Multimodal Toolbox

This toolbox provides models for working with multi-modal health data. The code is available on GitHub and Hugging Face 🤗.

Getting started

The best way to get started is by running the phrase grounding notebook. All the dependencies will be installed upon execution, so Python 3.7 and Jupyter are the only requirements to get started.
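Phrase grounding, as demonstrated in the notebook, scores each spatial patch embedding from the image encoder against a text phrase embedding to produce a similarity heatmap over the image. The following is a minimal numpy sketch of that scoring step using toy embeddings; the function and variable names are illustrative and do not reflect the toolbox's actual API:

```python
import numpy as np

def grounding_heatmap(patch_embs: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Cosine similarity between each spatial patch embedding and a phrase embedding.

    patch_embs: (H, W, D) grid of patch embeddings from an image encoder.
    text_emb: (D,) embedding of the text phrase.
    Returns an (H, W) similarity heatmap in [-1, 1].
    """
    patches = patch_embs / np.linalg.norm(patch_embs, axis=-1, keepdims=True)
    text = text_emb / np.linalg.norm(text_emb)
    return patches @ text

# Toy 4x4 grid of 3-d patch embeddings standing in for real encoder output.
rng = np.random.default_rng(0)
patch_embs = rng.normal(size=(4, 4, 3))
text_emb = np.array([1.0, 0.0, 0.0])
heatmap = grounding_heatmap(patch_embs, text_emb)  # (4, 4) heatmap
```

In the real pipeline, the patch grid comes from the image encoder and the phrase embedding from the text encoder of the joint vision-language model; the heatmap is then upsampled to the image resolution for visualisation.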

The notebook can also be run on Binder, without the need to download any code or install any libraries.

Installation

The latest version can be installed using pip:

pip install "git+https://github.com/microsoft/hi-ml.git#subdirectory=hi-ml-multimodal"

Development

For development, it is recommended to clone the repository and set up the environment using conda:

git clone https://github.com/microsoft/hi-ml.git
cd hi-ml/hi-ml-multimodal
make env

This will create a conda environment named multimodal and install all the dependencies to run and test the package.

You can visit the API documentation for a deeper understanding of our tools.

Examples

For zero-shot classification of images using text prompts, please refer to the example script, which uses a small subset of the Open-Indiana CXR dataset for pneumonia detection in chest X-ray images. Please note that the examples and models are not intended for deployed use cases, commercial or otherwise; such use is currently out of scope.
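Zero-shot classification in this setting embeds the image and one text prompt per class in a shared space, then picks the class whose prompt has the highest cosine similarity to the image. A minimal numpy sketch of that scoring step with placeholder embeddings (names here are illustrative, not the example script's API):

```python
import numpy as np

def zero_shot_scores(image_emb: np.ndarray, prompt_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one image embedding and one prompt embedding per class.

    image_emb: (D,) image embedding.
    prompt_embs: (C, D) matrix of class-prompt embeddings.
    Returns a (C,) vector of similarity scores.
    """
    img = image_emb / np.linalg.norm(image_emb)
    prompts = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    return prompts @ img

# Toy embeddings standing in for the joint model's outputs.
image_emb = np.array([0.9, 0.1, 0.0])
prompt_embs = np.array([
    [1.0, 0.0, 0.0],  # e.g. a prompt describing findings of pneumonia
    [0.0, 1.0, 0.0],  # e.g. a prompt describing no evidence of pneumonia
])
scores = zero_shot_scores(image_emb, prompt_embs)
predicted_class = int(np.argmax(scores))
```

In practice, the quality of the prompts matters: the BioViL work referenced below shows that domain-specific text (e.g. radiology phrasing) substantially affects these similarity scores.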

Hugging Face 🤗

While the GitHub repository provides examples and pipelines to use our models, the weights and model cards are hosted on Hugging Face 🤗.

Credit

If you use our code or models in your research, please cite our paper, presented at the 2022 European Conference on Computer Vision (ECCV):

Boecking, B., Usuyama, N. et al. (2022). Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13696. Springer, Cham. https://doi.org/10.1007/978-3-031-20059-5_1

BibTeX

@InProceedings{10.1007/978-3-031-20059-5_1,
    author="Boecking, Benedikt
        and Usuyama, Naoto
        and Bannur, Shruthi
        and Castro, Daniel C.
        and Schwaighofer, Anton
        and Hyland, Stephanie
        and Wetscherek, Maria
        and Naumann, Tristan
        and Nori, Aditya
        and Alvarez-Valle, Javier
        and Poon, Hoifung
        and Oktay, Ozan",
    editor="Avidan, Shai
        and Brostow, Gabriel
        and Ciss{\'e}, Moustapha
        and Farinella, Giovanni Maria
        and Hassner, Tal",
    title="Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing",
    booktitle="Computer Vision -- ECCV 2022",
    year="2022",
    publisher="Springer Nature Switzerland",
    address="Cham",
    pages="1--21",
    isbn="978-3-031-20059-5"
}

Download files

Download the file for your platform.

Source distribution: hi-ml-multimodal-0.1.3.tar.gz (21.0 kB)
Built distribution: hi_ml_multimodal-0.1.3-py3-none-any.whl (27.6 kB)
