Compare commits

15 Commits
	| Author | SHA1 | Date | 
|---|---|---|
|  Shahul ES | 7133670252 | |
|  Izam Mohammed | 40031ab084 | |
|  Izam Mohammed | 379800d3f6 | |
|  Shahul ES | f21fa24f0e | |
|  Izam Mohammed | afa89749ad | |
|  Izam Mohammed | e6fb143c8f | |
|  Shahul ES | cd7c008d34 | |
|  Shahul ES | 287df5bff4 | |
|  shahules786 | b56cdf877a | |
|  Shahul ES | 915574bd30 | |
|  shahules786 | c0cdb9e6e9 | |
|  Shahul ES | d57ef2c10a | |
|  shahules786 | 0faa06027f | |
|  shahules786 | c88a87e109 | |
|  Shahul ES | fe82b398ee | |
@@ -0,0 +1,46 @@

# Contributing

Hi there 👋

If you're reading this, I hope you're looking forward to adding value to Mayavoz. This document will help you get started with your journey.

## How to get your code in Mayavoz

1. We use git and GitHub.

2. Fork the mayavoz repository (https://github.com/shahules786/mayavoz) on GitHub under your own account. (This creates a copy of mayavoz under your account; GitHub knows where it came from, and we typically call this “upstream”.)

3. Clone your own mayavoz repository: `git clone https://github.com/<your-account>/mayavoz` (This downloads the git repository to your machine; git knows where it came from and calls it “origin”.)
4. Create a branch for each specific feature you are developing: `git checkout -b your-branch-name`

5. Make and commit changes: `git add files-you-changed ...` then `git commit -m "Short message about what you did"`

6. Push the branch to your GitHub repository: `git push origin your-branch-name`

7. Navigate to GitHub and create a pull request from your branch to the “develop” branch of the upstream repository (shahules786/mayavoz).

8. The pull request (PR) appears on the upstream repository. Discuss your contribution there. If you push more changes to your branch on GitHub (in your repository), they are added to the PR.

9. When the reviewer is satisfied that the code improves repository quality, they can merge it.
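Steps 4–6 above are plain git. A minimal sketch, demonstrated in a throwaway repository with a hypothetical branch name `fix-docs` (the clone and push steps are left as comments because they need your own GitHub fork):

```shell
# Steps 2-3 (run against your own fork; <your-account> is a placeholder):
#   git clone https://github.com/<your-account>/mayavoz && cd mayavoz
# Steps 4-6, demonstrated here in a scratch repository:
cd "$(mktemp -d)" && git init --quiet
git config user.name "demo" && git config user.email "demo@example.com"
git checkout --quiet -b fix-docs                  # one branch per feature
echo "demo change" > notes.txt
git add notes.txt
git commit --quiet -m "Short message about what you did"
git branch --show-current                         # prints: fix-docs
# Steps 6-7 (needs your fork on GitHub): git push origin fix-docs
```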
Note that CI tests will be run when you create a PR. If you want to be sure that your code will not fail these tests, we have set up pre-commit hooks that you can install.

**If you're worried about things not being perfect with your code, we will work together and make it perfect. So, make your move!**
## Formatting

We use [black](https://black.readthedocs.io/en/stable/) and [flake8](https://flake8.pycqa.org/en/latest/) for code formatting. Please ensure that you run both before submitting the PR.
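For example, both tools run from the command line; the file below is just a scratch example, not a mayavoz source file:

```shell
pip install black flake8 --quiet

# A scratch file with cramped formatting; in practice, run the tools
# on the files you changed.
printf 'x=1\n' > /tmp/sample.py

black --quiet /tmp/sample.py    # rewrites the file in place
flake8 /tmp/sample.py           # prints nothing when the file is clean
cat /tmp/sample.py              # prints: x = 1
```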
## Testing

We use unit testing with [pytest](https://docs.pytest.org/en/latest/contents.html). Please make sure that adding your new component does not decrease test coverage.
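As a sketch of what such a test looks like (the `rms` helper below is hypothetical, not a mayavoz function): pytest collects any `test_*` function from files named `test_*.py` and reports a failure whenever an `assert` is false.

```python
# Hypothetical helper: root-mean-square level of a list of audio samples.
def rms(samples):
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

# pytest discovers functions named test_* and runs their assertions.
def test_rms_of_unit_signal():
    assert abs(rms([1.0, -1.0, 1.0, -1.0]) - 1.0) < 1e-9

def test_rms_of_silence():
    assert rms([0.0, 0.0]) == 0.0
```

Saved as e.g. `tests/test_levels.py`, this runs with `pytest tests/`.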
## Other tools

The use of [pre-commit](https://pre-commit.com/) is recommended to enforce requirements such as code formatting automatically.
## How to start contributing to Mayavoz?

1. Check out issues marked `good first issue` and let us know you're interested in working on one by commenting under it.
2. Otherwise, I would suggest exploring mayavoz. One way to do so is to use it to train your own model. This way you might end up finding a new, unreported bug or getting an idea to improve Mayavoz.
							
								
								
									
LICENSE
@@ -1,7 +1,6 @@

| Before | After |
|---|---|
| MIT License | MIT License |
| Copyright (c) 2019 Pariente Manuel | Copyright (c) 2022 Shahul Es |
| Permission is hereby granted, free of charge, to any person obtaining a copy | Permission is hereby granted, free of charge, to any person obtaining a copy |
| of this software and associated documentation files (the "Software"), to deal | of this software and associated documentation files (the "Software"), to deal |
| in the Software without restriction, including without limitation the rights | in the Software without restriction, including without limitation the rights |
							
								
								
									
README.md

@@ -2,26 +2,26 @@
| Before | After |
|---|---|
| <img src="https://user-images.githubusercontent.com/25312635/195514652-e4526cd1-1177-48e9-a80d-c8bfdb95d35f.png" /> | <img src="https://user-images.githubusercontent.com/25312635/195514652-e4526cd1-1177-48e9-a80d-c8bfdb95d35f.png" /> |
| </p> | </p> |
| mayavoz is a Pytorch-based opensource toolkit for speech enhancement. It is designed to save time for audio researchers. Is provides easy to use pretrained audio enhancement models and facilitates highly customisable model training. | mayavoz is a Pytorch-based opensource toolkit for speech enhancement. It is designed to save time for audio practioners & researchers. It provides easy to use pretrained speech enhancement models and facilitates highly customisable model training. |
| \| **[Quick Start](#quick-start-fire)** \| **[Installation](#installation)** \| **[Tutorials](https://github.com/shahules786/enhancer/tree/main/notebooks)** \| **[Available Recipes](#recipes)** \| **[Demo](#demo)** | \| **[Quick Start](#quick-start-fire)** \| **[Installation](#installation)** \| **[Tutorials](https://github.com/shahules786/enhancer/tree/main/notebooks)** \| **[Available Recipes](#recipes)** \| **[Demo](#demo)** |
| ## Key features :key: | ## Key features :key: |
| * Various pretrained models nicely integrated with huggingface :hugs: that users can select and use without any hastle. | * Various pretrained models nicely integrated with [huggingface hub](https://huggingface.co/docs/hub/index) :hugs: that users can select and use without any hastle. |
| * :package: Ability to train and validation your own custom speech enhancement models with just under 10 lines of code! | * :package: Ability to train and validate your own custom speech enhancement models with just under 10 lines of code! |
| * :magic_wand: A command line tool that facilitates training of highly customisable speech enhacement models from the terminal itself! | * :magic_wand: A command line tool that facilitates training of highly customisable speech enhacement models from the terminal itself! |
| * :zap: Supports multi-gpu training integrated with Pytorch Lightning. | * :zap: Supports multi-gpu training integrated with [Pytorch Lightning](https://pytorchlightning.ai/). |
|  | * :shield: data augmentations integrated using [torch-augmentations](https://github.com/asteroid-team/torch-audiomentations) |
| ## Demo | ## Demo |
| Noisy audio followed by enhanced audio. | Noisy speech followed by enhanced version. |
| https://user-images.githubusercontent.com/25312635/203756185-737557f4-6e21-4146-aa2c-95da69d0de4c.mp4 | https://user-images.githubusercontent.com/25312635/203756185-737557f4-6e21-4146-aa2c-95da69d0de4c.mp4 |
@@ -95,6 +95,7 @@ class Inference:

| Before | After |
|---|---|
| ): | ): |
| """ | """ |
| stitch batched waveform into single waveform. (Overlap-add) | stitch batched waveform into single waveform. (Overlap-add) |
|  | inspired from https://github.com/asteroid-team/asteroid |
| arguments: | arguments: |
| data: batched waveform | data: batched waveform |
| window_size : window_size used to batch waveform | window_size : window_size used to batch waveform |
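The overlap-add stitching mentioned in the docstring above can be sketched in a few lines (a simplified stand-in, not mayavoz's actual implementation): overlapping windows are summed back at their original offsets, and each output sample is divided by the number of windows that covered it.

```python
def overlap_add(chunks, window_size, step_size):
    """Stitch overlapping fixed-size chunks back into one waveform.

    chunks: list of lists, each of length window_size, taken from the
    original signal at offsets 0, step_size, 2*step_size, ...
    """
    total = step_size * (len(chunks) - 1) + window_size
    out = [0.0] * total
    count = [0] * total     # how many windows cover each sample
    for i, chunk in enumerate(chunks):
        start = i * step_size
        for j, value in enumerate(chunk):
            out[start + j] += value
            count[start + j] += 1
    return [o / c for o, c in zip(out, count)]

# Chopping a signal into overlapping windows and stitching it back
# should reproduce the signal exactly.
signal = [float(n) for n in range(10)]
window, step = 4, 2
chunks = [signal[i:i + window] for i in range(0, len(signal) - window + 1, step)]
print(overlap_add(chunks, window, step) == signal)  # prints: True
```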
@@ -129,7 +129,7 @@ class ComplexConvTranspose2d(nn.Module):

| Before | After |
|---|---|
| imag_real = self.real_conv(imag) | imag_real = self.real_conv(imag) |
| real = real_real - imag_imag | real = real_real - imag_imag |
| imag = real_imag - imag_real | imag = real_imag + imag_real |
| out = torch.cat([real, imag], 1) | out = torch.cat([real, imag], 1) |
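The sign change in that hunk follows from the complex multiplication rule: for (a + bi)(c + di) = (ac - bd) + (ad + bc)i, the real part subtracts the product of imaginary parts while the imaginary part adds the two cross terms. A small numeric check (plain Python, not the layer itself):

```python
def complex_mul(a, b, c, d):
    """Multiply (a + bi) by (c + di), returning (real, imag)."""
    real = a * c - b * d    # analogous to: real_real - imag_imag
    imag = a * d + b * c    # analogous to: real_imag + imag_real  (note the +)
    return real, imag

# Cross-check against Python's builtin complex numbers:
# (1 + 2j) * (3 + 4j) = -5 + 10j
print(complex_mul(1.0, 2.0, 3.0, 4.0))  # prints: (-5.0, 10.0)
```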