mirror of
https://github.com/openai/shap-e.git
synced 2026-02-02 17:59:50 +08:00
add tip
@@ -68,6 +68,6 @@ Install with `pip install -e .`.
 
 To get started with examples, see the following notebooks:
 
-* [sample_text_to_3d.ipynb](shap_e/examples/sample_text_to_3d.ipynb) - sample a 3D model, conditioned on a text prompt
-* [sample_image_to_3d.ipynb](shap_e/examples/sample_image_to_3d.ipynb) - sample a 3D model, conditioned on an synthetic view image.
+* [sample_text_to_3d.ipynb](shap_e/examples/sample_text_to_3d.ipynb) - sample a 3D model, conditioned on a text prompt.
+* [sample_image_to_3d.ipynb](shap_e/examples/sample_image_to_3d.ipynb) - sample a 3D model, conditioned on a synthetic view image. To get the best result, you should remove background from the input image.
 * [encode_model.ipynb](shap_e/examples/encode_model.ipynb) - loads a 3D model or a trimesh, creates a batch of multiview renders and a point cloud, encodes them into a latent, and renders it back. For this to work, install Blender version 3.3.1 or higher, and set the environment variable `BLENDER_PATH` to the path of the Blender executable.
@@ -48,6 +48,7 @@
 "batch_size = 4\n",
 "guidance_scale = 3.0\n",
 "\n",
+"# To get the best result, you should remove the background and show only the object of interest to the model.\n",
 "image = load_image(\"example_data/corgi.png\")\n",
 "\n",
 "latents = sample_latents(\n",
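The tip added by this commit says to remove the background so only the object of interest is shown to the model. As a minimal sketch of what that can mean in practice, the snippet below composites RGBA pixels onto a white background, assuming a segmentation/alpha matte is already available; the helper name `flatten_onto_white` and the pixel-list representation are illustrative only and not part of shap-e.

```python
def flatten_onto_white(pixels):
    """Composite RGBA pixels onto a white background.

    pixels: list of (r, g, b, a) tuples, channels in 0..255.
    Fully transparent pixels (a == 0) become pure white, so only
    the object of interest remains visible in the output.
    """
    out = []
    for r, g, b, a in pixels:
        alpha = a / 255.0
        out.append((
            round(r * alpha + 255 * (1 - alpha)),
            round(g * alpha + 255 * (1 - alpha)),
            round(b * alpha + 255 * (1 - alpha)),
        ))
    return out

# A transparent pixel becomes white; an opaque pixel is unchanged.
print(flatten_onto_white([(10, 20, 30, 0), (10, 20, 30, 255)]))
# → [(255, 255, 255), (10, 20, 30)]
```

In a real pipeline the alpha matte would come from a background-removal tool; the resulting flattened image is what you would pass to `load_image` in the notebook.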