Commit Graph

82 Commits

Author SHA1 Message Date
Nathaniel Simard 95e660488e
Refactor/burn compute wgpu (#826) 2023-09-25 10:42:45 -04:00
Louis Fortier-Dubois 8c215e8be3
Bugfix/int swap dims (#823) 2023-09-22 08:38:38 -04:00
Juliano Decico Negri 293020aae6
#384 Include tests for int.rs and float.rs (#794) 2023-09-21 09:00:09 -04:00
Nathaniel Simard ac4adb54ea
Burn compute (#809) 2023-09-18 19:56:53 -04:00
Nathaniel Simard af0be5cfeb
Chore: bump version (#777) 2023-09-06 12:15:13 -04:00
Nathaniel Simard c95b34c511
Book: backend extension + custom wgpu kernel (#728) 2023-08-31 09:55:43 -04:00
Louis Fortier-Dubois c89f9969ed
Perf/tensor ops/tests (#710) 2023-08-28 12:53:17 -04:00
Mathias Insley d2aa4c0c9d
Perf/Empty Context Cache (#676)
* Add a pipeline_counter and methods for process of retaining best kernel

* Put a tune flag on the Context

* Put counts into cache instead of using pipeline_counter

* Formatting

* Add optimize_cache flag and rework ComputePipeline clearing process

* Update tune() so that it starts Context tuning and flags the Context as ready for clearing

* Consistent single quotes

* Use AtomicBool for is_tuning, prevent caching during tuning

* Collect TemplateIds during tuning and clean them out after tuning

* Fix comment

* Move cache cleanup to stop_tuning function
2023-08-28 10:04:05 -04:00
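
The bullet list above outlines an autotuning flow for the wgpu Context: an AtomicBool marks the Context as tuning, pipelines compiled for candidate kernels are tracked by their TemplateId, and the cleanup happens in a stop_tuning step that evicts those candidates so only the retained kernels stay cached. A minimal sketch of one plausible reading of that flow, with hypothetical names (Context, cache_pipeline, stop_tuning) that only approximate the actual burn-wgpu code:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

type TemplateId = String;

struct ComputePipeline; // stand-in for wgpu::ComputePipeline

#[derive(Default)]
struct Context {
    is_tuning: AtomicBool,
    cache: Mutex<HashMap<TemplateId, ComputePipeline>>,
    tuned_ids: Mutex<HashSet<TemplateId>>, // candidates compiled while tuning
}

impl Context {
    fn start_tuning(&self) {
        self.is_tuning.store(true, Ordering::Relaxed);
    }

    // Cache a compiled pipeline; while tuning, remember its id so the
    // candidate can be evicted once the best kernel has been chosen.
    fn cache_pipeline(&self, id: TemplateId, pipeline: ComputePipeline) {
        if self.is_tuning.load(Ordering::Relaxed) {
            self.tuned_ids.lock().unwrap().insert(id.clone());
        }
        self.cache.lock().unwrap().insert(id, pipeline);
    }

    // Stop tuning and clean the collected TemplateIds out of the cache,
    // keeping only entries compiled outside the tuning phase.
    fn stop_tuning(&self) {
        self.is_tuning.store(false, Ordering::Relaxed);
        let mut cache = self.cache.lock().unwrap();
        for id in self.tuned_ids.lock().unwrap().drain() {
            cache.remove(&id);
        }
    }
}

fn main() {
    let ctx = Context::default();
    ctx.start_tuning();
    ctx.cache_pipeline("matmul_candidate_a".into(), ComputePipeline);
    ctx.stop_tuning(); // tuning candidates evicted from the cache
    assert!(ctx.cache.lock().unwrap().is_empty());
}
```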
MOZGIII 7f558bdc46
Expose element traits (#700) 2023-08-27 09:02:39 -04:00
Louis Fortier-Dubois fb2a71bb81
remove to device (#694) 2023-08-25 09:55:18 -04:00
Jerome Robert edb3e9fc4b
Do not use default device when running kernel::matmul::tune (#684)
Use the device of the involved Tensor instead of Device::default
2023-08-24 14:01:27 -04:00
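
The note above captures the fix in #684: autotuning should run on whatever device the involved tensors already live on, rather than silently falling back to the default device. A self-contained sketch of that idea using stand-in types, not the real burn Tensor/Backend API:

```rust
// Sketch only: hypothetical Device/Tensor types, not burn's actual API.
#[derive(Clone, Debug, Default, PartialEq)]
struct Device(usize); // 0 = default adapter, 1 = a secondary GPU, ...

struct Tensor {
    device: Device,
    // ... shape, buffer handle, etc.
}

impl Tensor {
    fn device(&self) -> Device {
        self.device.clone()
    }
}

// Before the fix, the tuning path implicitly used Device::default();
// after it, the device comes from the tensors being multiplied.
fn tune_matmul(lhs: &Tensor, _rhs: &Tensor) -> Device {
    lhs.device() // instead of Device::default()
}

fn main() {
    let on_second_gpu = Tensor { device: Device(1) };
    assert_eq!(
        tune_matmul(&on_second_gpu, &on_second_gpu),
        Device(1) // tuning runs where the data lives, not on the default device
    );
}
```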
Nathaniel Simard d18d1b0bb9
Can configure wgpu max tasks (#603) 2023-08-23 12:20:27 -04:00
Caio Piccirillo 2fefc82099
Dilation maxpool (#668) 2023-08-21 14:14:25 -04:00
Nathaniel Simard bda03c6a76
Feat/avg pool/include pad config (#653) 2023-08-17 08:50:31 -04:00
Louis Fortier-Dubois d659f11639
Perf/wgpu/autotune (#609) 2023-08-15 11:26:00 -04:00
Nathaniel Simard c74e75f748
Fix/wgpu/max pool2d backward (#613) 2023-08-09 16:45:49 -04:00
Caio Piccirillo 1d3bbaab13
Typos (#608) 2023-08-08 17:57:51 -04:00
Nathaniel Simard 441a7011ce
Feat/tensor casting (#604) 2023-08-08 10:02:17 -04:00
Nathaniel Simard 8bc687e1bb
WGPU use best limits for the adaptor (#601) 2023-08-07 15:10:21 -04:00
Gadersd ed255c5561
Use buffered io for massive performance gains when loading and saving… (#593) 2023-08-06 12:56:27 -04:00
Nathaniel Simard 8436d4ff66
Feat/tensor/adaptive avg pool2d (#572) 2023-08-04 10:23:59 -04:00
dependabot[bot] b7ad23bd87
Update serial_test requirement from 0.5.0 to 2.0.0 (#579)
Updates the requirements on [serial_test](https://github.com/palfrey/serial_test) to permit the latest version.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v0.5.0...v2.0.0)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-03 09:21:47 -04:00
Louis Fortier-Dubois d5f9f69cea
Refactor/wgpu/prng (#576) 2023-08-02 16:08:50 -04:00
mmalczak 73fb0eaa7e
Addition of abs tensor operator #506 (#553) 2023-08-01 18:25:14 -04:00
Louis Fortier-Dubois 87125da6c9
Feat/wgpu/prng bernoulli (#571) 2023-08-01 12:54:22 -04:00
Louis Fortier-Dubois a69788ad4b
Feat/wgpu/prng normal (#570) 2023-08-01 10:59:49 -04:00
Louis Fortier-Dubois daedec6f6d
Feat/wgpu/prng (#560)
* wip

* wip

* default prng working but not perfectly random

* format

* format

* fix prng algo

* clippy

* uniform prng

* refactor tests for prng

* cleanup
2023-08-01 09:26:49 -04:00
Louis Fortier-Dubois aa4af29e3f
Matmul speedup (contiguous load) (#559) 2023-07-28 10:41:27 -04:00
Dilshod Tadjibaev 74c41bdda2
Add clamp, clamp_min, clamp_max tensor ops (#550) 2023-07-26 20:02:38 -04:00
Nathaniel Simard 0a5a2d729a
chore: bump version for next release (#533) 2023-07-26 09:46:28 -04:00
Louis Fortier-Dubois 589b4503df
add wgpu readme (#531) 2023-07-25 10:44:53 -04:00
Louis Fortier-Dubois 7154bde53a
patch tanh bug on mac os (#520) 2023-07-24 19:29:10 -04:00
Nathaniel Simard eaef215b17
Feat: wgpu cast tensor type (#515) 2023-07-24 11:50:44 -04:00
Louis Fortier-Dubois 9aca1837c2
Example/wgpu/mnist (#514)
* add wgpu for mnist

* auto graphics api

* fix display tests

* clippy
2023-07-20 17:12:13 -04:00
Nathaniel Simard d7ce52f0da
Feat/wgpu/conv (#512) 2023-07-20 15:14:42 -04:00
Nathaniel Simard 3a153d5bd0
Feat/wgpu/default device (#513) 2023-07-20 15:07:50 -04:00
Louis Fortier-Dubois df2f1492f8
Feat/wgpu/matmul transpose (#509) 2023-07-20 14:21:58 -04:00
Louis Fortier-Dubois 4b60c0e7a0
continuous to contiguous (#511) 2023-07-20 11:28:35 -04:00
Louis Fortier-Dubois 57a5476c89
bugfix for macos test (#503) 2023-07-18 16:15:00 -04:00
Louis Fortier-Dubois 5ece894e02
Bugfix/matmul/asymmetric shapes (#504) 2023-07-18 16:14:43 -04:00
Nathaniel Simard f7c7d35ef5
Feat/wgpu/avg pooling (#502) 2023-07-18 11:36:57 -04:00
Nathaniel Simard c4afff182f
Feat/wgpu/max pool2d (#500) 2023-07-14 13:58:08 -04:00
Dilshod Tadjibaev e267fc1e6f
Temporarily disable broken tests for M1 Mac (#491)
A temp workaround for #480
2023-07-13 17:10:26 -04:00
Dilshod Tadjibaev 53c088209d
Fix new clippy warnings that cause the CI to fail (#494) 2023-07-13 13:39:39 -04:00
Nathaniel Simard a2ac2057d8
Fix: wgpu scatter with different shapes (#489) 2023-07-12 13:21:19 -04:00
Nathaniel Simard ddbbe39d74
Fix: wgpu cat + copy ops (#477) 2023-07-07 12:19:10 -04:00
Nathaniel Simard 513b9281c2
Feat/matmul/faster (#479) 2023-07-07 12:00:37 -04:00
Nathaniel Simard 261aa952c0
Add data benchmark (#474) 2023-07-07 10:21:24 -04:00
Nathaniel Simard 017485b9ea
Refactor/wgpu/ops (#472) 2023-07-06 14:26:27 -04:00
Nathaniel Simard 04ad14a32a
refactor: wgpu reductions (#471) 2023-07-06 11:40:37 -04:00