Search Results for author: Dimitris Stripelis

Found 17 papers, 1 paper with code

Fox-1 Technical Report

no code implementations • 8 Nov 2024 • Zijian Hu, Jipeng Zhang, Rui Pan, Zhaozhuo Xu, Shanshan Han, Han Jin, Alay Dilipbhai Shah, Dimitris Stripelis, Yuhang Yao, Salman Avestimehr, Chaoyang He

To improve pre-training efficiency, the Fox-1-1.6B model introduces a novel 3-stage data curriculum across all of the training data, with sequence lengths ranging from 2K to 8K.

Tasks: 2k, 8k, +1 more
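The report summary above mentions a 3-stage curriculum over 2K-8K sequence lengths but not the stage boundaries. A staged sequence-length schedule can be sketched as follows; the stage split (60%/30%/10% of training) and the exact lengths per stage are illustrative assumptions, not Fox-1's published values:

```python
def curriculum_seq_len(step: int, total_steps: int) -> int:
    """Return the training sequence length for a hypothetical
    3-stage curriculum: short contexts early, ramping up to 8K."""
    progress = step / total_steps
    if progress < 0.6:    # stage 1: bulk of training at 2K context
        return 2048
    elif progress < 0.9:  # stage 2: intermediate 4K context
        return 4096
    else:                 # stage 3: final long-context stage at 8K
        return 8192

# Sample the schedule across a 100-step training run
lengths = [curriculum_seq_len(s, 100) for s in range(100)]
```

A schedule like this lets most tokens be processed at a cheap short context, reserving expensive long-sequence batches for the end of training.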

Alopex: A Computational Framework for Enabling On-Device Function Calls with LLMs

no code implementations • 7 Nov 2024 • Yide Ran, Zhaozhuo Xu, Yuhang Yao, Zijian Hu, Shanshan Han, Han Jin, Alay Dilipbhai Shah, Jipeng Zhang, Dimitris Stripelis, Tong Zhang, Salman Avestimehr, Chaoyang He

The rapid advancement of Large Language Models (LLMs) has led to their increased integration into mobile devices for personalized assistance, which enables LLMs to call external API functions to enhance their performance.

TensorOpera Router: A Multi-Model Router for Efficient LLM Inference

no code implementations • 22 Aug 2024 • Dimitris Stripelis, Zijian Hu, Jipeng Zhang, Zhaozhuo Xu, Alay Dilipbhai Shah, Han Jin, Yuhang Yao, Salman Avestimehr, Chaoyang He

With the rapid growth of Large Language Models (LLMs) across various domains, numerous new LLMs have emerged, each possessing domain-specific expertise.

ScaleLLM: A Resource-Frugal LLM Serving Framework by Optimizing End-to-End Efficiency

no code implementations • 23 Jul 2024 • Yuhang Yao, Han Jin, Alay Dilipbhai Shah, Shanshan Han, Zijian Hu, Yide Ran, Dimitris Stripelis, Zhaozhuo Xu, Salman Avestimehr, Chaoyang He

Large language models (LLMs) have surged in popularity and are extensively used in commercial applications, where the efficiency of model serving is crucial for the user experience.

TorchOpera: A Compound AI System for LLM Safety

no code implementations • 16 Jun 2024 • Shanshan Han, Zijian Hu, Alay Dilipbhai Shah, Han Jin, Yuhang Yao, Dimitris Stripelis, Zhaozhuo Xu, Chaoyang He

We introduce TorchOpera, a compound AI system for enhancing the safety and quality of prompts and responses for Large Language Models.

MetisFL: An Embarrassingly Parallelized Controller for Scalable & Efficient Federated Learning Workflows

no code implementations • 1 Nov 2023 • Dimitris Stripelis, Chrysovalantis Anastasiou, Patrick Toral, Armaghan Asghar, Jose Luis Ambite

The controller is responsible for managing the execution of FL workflows across learners, while the learners are responsible for training and evaluating federated models over their private datasets.

Tasks: Federated Learning, Scheduling

Federated Learning over Harmonized Data Silos

no code implementations • 15 May 2023 • Dimitris Stripelis, Jose Luis Ambite

Federated Learning is a distributed machine learning approach that enables geographically distributed data silos to collaboratively learn a joint machine learning model without sharing data.

Tasks: Data Integration, Federated Learning, +2 more

Towards Sparsified Federated Neuroimaging Models via Weight Pruning

no code implementations • 24 Aug 2022 • Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul Thompson, José Luis Ambite

In our experiments in centralized and federated settings on the brain age prediction task (estimating a person's age from their brain MRI), we demonstrate that models can be pruned up to 95% sparsity without affecting performance even in challenging federated learning environments with highly heterogeneous data distributions.

Tasks: Federated Learning
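The 95% sparsity figure above refers to zeroing out all but the largest-magnitude weights. A minimal NumPy sketch of one-shot magnitude pruning (a standard baseline; the paper's exact pruning procedure may differ):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity`
    fraction of the weights are exactly zero."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # keep only the large weights
    return weights * mask

w = np.random.randn(100, 100)
pruned = magnitude_prune(w, 0.95)  # ~95% of entries are now zero
```

In a federated setting, the same mask can be computed on the aggregated community model so all sites prune consistently.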

Secure & Private Federated Neuroimaging

1 code implementation • 11 May 2022 • Dimitris Stripelis, Umang Gupta, Hamza Saleem, Nikhil Dhinagar, Tanmay Ghai, Rafael Chrysovalantis Anastasiou, Armaghan Asghar, Greg Ver Steeg, Srivatsan Ravi, Muhammad Naveed, Paul M. Thompson, Jose Luis Ambite

Each site trains the neural network over its private data for some time, then shares the neural network parameters (i.e., weights, gradients) with a Federation Controller, which in turn aggregates the local models, sends the resulting community model back to each site, and the process repeats.

Tasks: Federated Learning
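The train-aggregate-broadcast loop described above is essentially federated averaging. A toy sketch of one controller round, ignoring the secure-aggregation and encryption machinery the paper layers on top (the `Site` class and its local update rule are hypothetical stand-ins):

```python
import numpy as np

class Site:
    """Stand-in for one data silo; data never leaves the site."""
    def __init__(self, data: np.ndarray):
        self.data = data
        self.num_examples = len(data)

    def train(self, model: np.ndarray, steps: int) -> np.ndarray:
        # Toy local update: nudge the model toward the site's data mean
        for _ in range(steps):
            model = model + 0.1 * (self.data.mean(axis=0) - model)
        return model

def federated_round(community_model, sites, local_steps=1):
    """One controller round: broadcast the community model, let each
    site train locally, then aggregate by example-weighted average."""
    local_models, sizes = [], []
    for site in sites:
        local_models.append(site.train(community_model.copy(), local_steps))
        sizes.append(site.num_examples)
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return sum(w * m for w, m in zip(weights, local_models))

sites = [Site(np.ones((80, 2))), Site(np.zeros((20, 2)))]
model = federated_round(np.zeros(2), sites)
```

Weighting by `num_examples` means larger silos pull the community model harder, which is the usual FedAvg aggregation rule.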

Federated Progressive Sparsification (Purge, Merge, Tune)+

no code implementations • 26 Apr 2022 • Dimitris Stripelis, Umang Gupta, Greg Ver Steeg, Jose Luis Ambite

Second, the models are incrementally constrained to a smaller set of parameters, which facilitates alignment/merging of the local models and improved learning performance at high sparsification rates.
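The incremental constraint described above can be pictured as a sparsity target that tightens over federated rounds. A hypothetical sketch (the linear ramp and the 95% endpoint are assumptions for illustration, not the paper's schedule):

```python
def sparsity_schedule(round_idx: int, total_rounds: int,
                      start: float = 0.0, end: float = 0.95) -> float:
    """Target sparsity for a given federated round, increasing
    progressively so local models shrink gradually and stay aligned."""
    frac = round_idx / max(total_rounds - 1, 1)
    return start + (end - start) * frac

# Each round, sites would purge weights down to this round's target,
# the controller merges the surviving parameters, and sites fine-tune.
targets = [sparsity_schedule(r, 5) for r in range(5)]
```

Because every site prunes toward the same target each round, the local models keep overlapping support sets, which is what makes merging them feasible at high sparsity.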

Federated Named Entity Recognition

no code implementations • 28 Mar 2022 • Joel Mathew, Dimitris Stripelis, José Luis Ambite

We present an analysis of the performance of Federated Learning in a paradigmatic natural-language processing task: Named-Entity Recognition (NER).

Tasks: Federated Learning, Named Entity Recognition, +2 more

Secure Neuroimaging Analysis using Federated Learning with Homomorphic Encryption

no code implementations • 7 Aug 2021 • Dimitris Stripelis, Hamza Saleem, Tanmay Ghai, Nikhil Dhinagar, Umang Gupta, Chrysovalantis Anastasiou, Greg Ver Steeg, Srivatsan Ravi, Muhammad Naveed, Paul M. Thompson, Jose Luis Ambite

Federated learning (FL) enables distributed computation of machine learning models over various disparate, remote data sources, without requiring the transfer of any individual data to a centralized location.

Tasks: Benchmarking, Federated Learning

Membership Inference Attacks on Deep Regression Models for Neuroimaging

no code implementations • 6 May 2021 • Umang Gupta, Dimitris Stripelis, Pradeep K. Lam, Paul M. Thompson, José Luis Ambite, Greg Ver Steeg

In particular, we show that it is possible to infer if a sample was used to train the model given only access to the model prediction (black-box) or access to the model itself (white-box) and some leaked samples from the training data distribution.

Tasks: Deep Learning, Federated Learning, +1 more
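The black-box setting described above is commonly instantiated as a loss-threshold attack: training samples tend to incur lower loss than held-out samples. A schematic sketch (this threshold rule is a standard baseline, not necessarily the paper's exact attack; the loss values are made up):

```python
import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict 'member of the training set' (True) whenever the
    model's loss on a sample falls below the threshold."""
    return losses < threshold

# Hypothetical per-sample losses from a leaked evaluation
member_losses = np.array([0.05, 0.10, 0.08])      # seen during training
non_member_losses = np.array([0.90, 1.20, 0.75])  # held out
preds = loss_threshold_attack(
    np.concatenate([member_losses, non_member_losses]), threshold=0.5)
```

The gap between member and non-member losses is exactly the overfitting signal the attack exploits, which is why such attacks are a concern for models trained on sensitive neuroimaging data.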

Semi-Synchronous Federated Learning for Energy-Efficient Training and Accelerated Convergence in Cross-Silo Settings

no code implementations • 4 Feb 2021 • Dimitris Stripelis, Jose Luis Ambite

There are situations where data relevant to machine learning problems are distributed across multiple locations that cannot share the data due to regulatory, competitiveness, or privacy reasons.

Tasks: BIG-bench Machine Learning, Federated Learning

Accelerating Federated Learning in Heterogeneous Data and Computational Environments

no code implementations • 25 Aug 2020 • Dimitris Stripelis, Jose Luis Ambite

There are situations where data relevant to a machine learning problem are distributed among multiple locations that cannot share the data due to regulatory, competitiveness, or privacy reasons.

Tasks: Federated Learning
